About

I received my degree in Mathematics from the University of Valencia in 2008. The following academic year I was awarded a grant from the "Fundación La Caixa" to pursue the M.S. "Mathematics Investigation" at the University of Valencia, jointly with the Polytechnic University of Valencia. I developed my master's project in the field of Computer-Aided Design, on the topic of Pythagorean Hodograph Curves, under the supervision of Juan Monterde.
In October 2009, I joined the PDE line at the Basque Center for Applied Mathematics, working mainly on the numerical treatment of PDEs. In October 2010, I obtained a fellowship from the "Fundación de Centros Tecnológicos - Iñaki Goenaga" (FCT-IG) to pursue a PhD in Mathematical Image Processing at the technological center Tecnalia Research and Innovation, under the supervision of David Pardo, from the UPV-EHU, together with Artzai Picón, from Tecnalia. In October 2010 I completed with honors the M.S. in "Mathematical Modelling, Statistics and Computation" at the UPV-EHU. In December 2015 I defended my PhD thesis on Image Restoration under Attenuating Media. From then until September 2016 I worked as a senior researcher at Tecnalia, and since September 2016 I have been a Post-Doctoral fellow at INESC TEC Porto, within the C-BER group, under the supervision of Professor Aurélio Campilho.

Publications

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
Medical Image Analysis

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future. © 2018 Elsevier B.V.

2018

Retinal image quality assessment by mean-subtracted contrast-normalized coefficients

Authors
Galdran, A; Araujo, T; Mendonca, AM; Campilho, A;

Publication
Lecture Notes in Computational Vision and Biomechanics

Abstract
The automatic assessment of visual quality on images of the eye fundus is an important task in retinal image analysis. A novel quality assessment technique is proposed in this paper. We propose to compute Mean-Subtracted Contrast-Normalized (MSCN) coefficients on local spatial neighborhoods of a given image and analyze their distribution. It is known that for natural images, such distribution behaves normally, while distortions of different kinds perturb this regularity. The combination of MSCN coefficients with a simple measure of local contrast allows us to design a simple but effective retinal image quality assessment algorithm that successfully discriminates between good and low-quality images, while delivering a meaningful quality score. The proposed technique is validated on a recent database of quality-labeled retinal images, obtaining results aligned with state-of-the-art approaches at a low computational cost. © 2018, Springer International Publishing AG.
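The MSCN computation at the core of this method can be sketched as follows. This is a minimal illustration of the standard mean-subtracted contrast-normalized transform; the Gaussian window scale and the stabilizing constant are illustrative choices, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7 / 6, c=1.0):
    """Mean-Subtracted Contrast-Normalized coefficients of an image.

    Each pixel is normalized by the local mean and local standard
    deviation, both estimated with a Gaussian-weighted window.
    For natural images the resulting coefficients are approximately
    zero-mean and Gaussian-distributed; distortions perturb this.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                      # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu   # local variance
    sigma_local = np.sqrt(np.abs(var))                      # local std. dev.
    return (image - mu) / (sigma_local + c)                 # stabilized division
```

A quality score could then be derived from the empirical distribution of these coefficients, e.g. by measuring how far it departs from a Gaussian fit.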

2018

End-to-end Adversarial Retinal Image Synthesis

Authors
Costa, P; Galdran, A; Meyer, MI; Niemeijer, M; Abramoff, M; Mendonca, AM; Campilho, A;

Publication
IEEE Transactions on Medical Imaging

Abstract
In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a Generative Adversarial Network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while being also anatomically consistent and displaying a reasonable visual quality. IEEE
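The adversarial game described above, in which the generator is trained against a classifier that separates real from synthetic samples, can be sketched with the standard GAN losses. This is a generic illustration of the two-player objective, not the paper's actual implementation:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Binary cross-entropy losses for the two-player adversarial game.

    d_real / d_fake are the discriminator's sigmoid outputs on real and
    synthesized samples (values in (0, 1)).  The discriminator minimizes
    d_loss (classify real as 1, fake as 0); the generator minimizes the
    non-saturating g_loss (make the discriminator output 1 on fakes).
    """
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

A confident discriminator yields a low d_loss and a high g_loss, which is the gradient signal that pushes the generator toward more realistic samples.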

2018

A Weakly-Supervised Framework for Interpretable Diabetic Retinopathy Detection on Retinal Images

Authors
Costa, P; Galdran, A; Smailagic, A; Campilho, A;

Publication
IEEE ACCESS

Abstract
Diabetic retinopathy (DR) detection is a critical retinal image analysis task in the context of early blindness prevention. Unfortunately, in order to train a model to accurately detect DR based on the presence of different retinal lesions, typically a dataset with medical experts' annotations at the pixel level is needed. In this paper, a new methodology based on the multiple instance learning (MIL) framework is developed in order to overcome this necessity by leveraging the implicit information present on annotations made at the image level. Contrary to previous MIL-based DR detection systems, the main contribution of the proposed technique is the joint optimization of the instance encoding and the image classification stages. In this way, more useful mid-level representations of pathological images can be obtained. The explainability of the model decisions is further enhanced by means of a new loss function enforcing appropriate instance and mid-level representations. The proposed technique achieves comparable or better results than other recently proposed methods, with 90% area under the receiver operating characteristic curve (AUC) on Messidor, 93% AUC on DR1, and 96% AUC on DR2, while improving the interpretability of the produced decisions.
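The MIL aggregation step that turns image-level labels into supervision for instance (patch) scores can be illustrated with the standard MIL assumption: an image is positive if at least one of its instances is positive. The snippet below is a generic sketch of this pooling idea, not the paper's specific architecture:

```python
import numpy as np

def mil_image_score(instance_scores):
    """Aggregate per-patch (instance) lesion scores into one image-level
    prediction.  Under the standard MIL assumption, a bag (image) is
    positive iff at least one instance is positive, so max-pooling over
    instance scores is the natural aggregation."""
    return float(np.max(instance_scores))
```

Because only the image-level label is needed for training, the instance scores that emerge act as a lesion localization map, which is what makes this kind of model interpretable.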

2018

Deep Convolutional Artery/Vein Classification of Retinal Vessels

Authors
Meyer, MI; Galdran, A; Costa, P; Mendonça, AM; Campilho, A;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
The classification of retinal vessels into arteries and veins in eye fundus images is a relevant task for the automatic assessment of vascular changes. This paper presents a new approach to solve this problem by means of a Fully-Connected Convolutional Neural Network that is specifically adapted for artery/vein classification. For this, a loss function that focuses only on pixels belonging to the retinal vessel tree is built. The relevance of providing the model with different chromatic components of the source images is also analyzed. The performance of the proposed method is evaluated on the RITE dataset of retinal images, achieving promising results, with an accuracy of 96% on large caliber vessels, and an overall accuracy of 84%. © 2018, Springer International Publishing AG, part of Springer Nature.
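A loss restricted to vessel-tree pixels, as described above, can be sketched by masking a per-pixel classification loss before averaging. This is a minimal illustration of the masking idea with a binary cross-entropy, not the paper's exact loss:

```python
import numpy as np

def masked_cross_entropy(probs, labels, vessel_mask, eps=1e-8):
    """Per-pixel binary cross-entropy averaged only over pixels inside
    the vessel mask, so that the (dominant) background pixels do not
    influence artery/vein training at all."""
    mask = vessel_mask.astype(bool)
    p = np.clip(probs[mask], eps, 1.0 - eps)   # predicted probabilities
    y = labels[mask]                           # 1 = artery, 0 = vein (say)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```

Restricting the average to the mask matters because vessel pixels are a small fraction of the image; without it, the loss would be dominated by easy background pixels.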