About

I'm Teresa Araújo, a PhD student and researcher at INESC TEC and the Faculdade de Engenharia da Universidade do Porto (FEUP).

I hold a Master's degree in Bioengineering from FEUP, with a specialization in Biomedical Engineering.

I am mainly interested in computer vision, machine learning, and medical image analysis. My current research topic is the automatic grading of diabetic retinopathy in color eye fundus images.

Publications

2020

Automatic lung nodule detection combined with gaze information improves radiologists' screening performance

Authors
Aresta, G; Ferreira, C; Pedrosa, J; Araujo, T; Rebelo, J; Negrao, E; Morgado, M; Alves, F; Cunha, A; Ramos, I; Campilho, A;

Publication
IEEE Journal of Biomedical and Health Informatics


2020

DR|GRADUATE: uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images

Authors
Araujo, T; Aresta, G; Mendonca, L; Penas, S; Maia, C; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
Medical Image Analysis


2020

Optic Disc and Fovea Detection in Color Eye Fundus Images

Authors
Mendonça, AM; Melo, T; Araújo, T; Campilho, A;

Publication
Lecture Notes in Computer Science - Image Analysis and Recognition


2020

Data Augmentation for Improving Proliferative Diabetic Retinopathy Detection in Eye Fundus Images

Authors
Araujo, T; Aresta, G; Mendonca, L; Penas, S; Maia, C; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
IEEE Access


2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vahdat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthy, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
Medical Image Analysis

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future. © 2018 Elsevier B.V.
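
The abstract frames tool annotation as deciding, for each video frame, which of the 21 tools are present. As an illustration of that task framing only, not of any participant's method, here is a minimal multi-label classification sketch in PyTorch; the backbone, input size, and training details are assumptions for the example.

import torch
import torch.nn as nn
from torchvision import models

NUM_TOOLS = 21  # the challenge annotates the presence of 21 surgical tools

class ToolPresenceNet(nn.Module):
    # Illustrative sketch: per-frame multi-label tool-presence classifier.
    # The ResNet-18 backbone is an assumption, not the challenge entries' choice.
    def __init__(self, num_tools: int = NUM_TOOLS):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any image backbone would do
        backbone.fc = nn.Linear(backbone.fc.in_features, num_tools)
        self.backbone = backbone

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) video frames -> one raw logit per tool
        return self.backbone(frames)

model = ToolPresenceNet()
frames = torch.randn(2, 3, 224, 224)                   # two dummy frames
labels = torch.randint(0, 2, (2, NUM_TOOLS)).float()   # 0/1 presence per tool
# Several tools can be visible at the same instant, so each tool gets its own
# sigmoid + binary cross-entropy term rather than a single softmax over tools.
loss = nn.BCEWithLogitsLoss()(model(frames), labels)
print(loss.item())

The per-tool sigmoid output matches the annotation scheme the abstract describes: each of the 21 tools is labeled present or absent independently in every frame.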