
Publications by Adrian Galdran

2018

UOLO - Automatic Object Detection and Segmentation in Biomedical Images

Authors
Araujo, T; Aresta, G; Galdran, A; Costa, P; Mendonca, AM; Campilho, A;

Publication
DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, DLMIA 2018

Abstract
We propose UOLO, a novel framework for the simultaneous detection and segmentation of structures of interest in medical images. UOLO consists of an object segmentation module whose intermediate abstract representations are processed and used as input for object detection. The resulting system is optimized simultaneously for detecting a class of objects and segmenting an optionally different class of structures. UOLO is trained on a set of bounding boxes enclosing the objects to detect, as well as on pixel-wise segmentation information, when available. A new loss function is devised that takes into account whether a reference segmentation is accessible for each training image, in order to suitably backpropagate the error. We validate UOLO on the task of simultaneous optic disc (OD) detection, fovea detection, and OD segmentation from retinal images, achieving state-of-the-art performance on public datasets.
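The abstract's idea of a loss that only backpropagates the segmentation error when a reference mask exists can be sketched as follows. This is an illustrative simplification, not the authors' formulation: the function name, the additive combination, and the masked-mean averaging are assumptions.

```python
import numpy as np

def joint_loss(det_losses, seg_losses, has_seg):
    """Sketch: detection loss over every image, segmentation loss
    only over images that come with a reference segmentation."""
    det_losses = np.asarray(det_losses, dtype=float)
    seg_losses = np.asarray(seg_losses, dtype=float)
    has_seg = np.asarray(has_seg, dtype=float)   # 1 if a mask is available
    det = det_losses.mean()                      # every image contributes
    denom = max(has_seg.sum(), 1.0)              # avoid division by zero
    seg = (has_seg * seg_losses).sum() / denom   # masked mean
    return det + seg
```

With this masking, images lacking pixel-wise annotation still drive the detection branch but contribute zero gradient to the segmentation term.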

2018

End-to-End Supervised Lung Lobe Segmentation

Authors
Ferreira, FT; Sousa, P; Galdran, A; Sousa, MR; Campilho, A;

Publication
2018 International Joint Conference on Neural Networks, IJCNN 2018, Rio de Janeiro, Brazil, July 8-13, 2018

Abstract
The segmentation and characterization of the lung lobes are important tasks for Computer-Aided Diagnosis (CAD) systems related to pulmonary disease. The detection of the fissures that divide the lung lobes is non-trivial when using classical methods that rely on anatomical information, such as the localization of the airways and vessels. This work presents a fully automatic, supervised approach to the segmentation of the five pulmonary lobes from a chest Computed Tomography (CT) scan using a Fully Regularized V-Net (FRV-Net), a 3D Fully Convolutional Neural Network trained end-to-end. Our network was trained and tested on a custom dataset that we make publicly available. It can correctly separate the lobes even in cases where the fissure is not well delineated, achieving a 0.93 per-lobe Dice Coefficient and a 0.85 inter-lobar Dice Coefficient on the test set. Both quantitative and qualitative results show that the proposed method can learn to produce correct lobe segmentations even when trained on a reduced dataset. © 2018 IEEE.
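The Dice Coefficient reported above is a standard overlap measure between a predicted and a reference binary mask; a minimal sketch (function name and epsilon smoothing are illustrative choices, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap; disjoint masks score near 0, so the reported 0.93 per-lobe value indicates very high agreement with the reference.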

2018

Image dehazing by artificial multiple-exposure image fusion

Authors
Galdran, A;

Publication
SIGNAL PROCESSING

Abstract
Bad weather conditions can reduce visibility in images acquired outdoors, decreasing their visual quality. The image processing task concerned with the mitigation of this effect is known as image dehazing. In this paper we present a new image dehazing technique that can remove the visual degradation due to haze without relying on the inversion of a physical model of haze formation, while still respecting its main underlying assumptions. Hence, the proposed technique avoids the need to estimate depth in the scene, as well as costly depth-map refinement processes. To achieve this goal, the original hazy image is first artificially under-exposed by means of a sequence of gamma-correction operations. The resulting set of multiply-exposed images is merged into a haze-free result through a multi-scale Laplacian blending scheme. A detailed experimental evaluation is presented in terms of both qualitative and quantitative analysis. The obtained results indicate that the fusion of artificially under-exposed images can effectively remove the effect of haze, even in challenging situations where other current image dehazing techniques fail to produce good-quality results. An implementation of the technique is open-sourced for reproducibility (https://github.com/agaldran/amef_dehazing).
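The pipeline described in the abstract (artificial under-exposure via gamma correction, then multi-scale Laplacian fusion of the exposure stack) can be sketched for a single-channel image in [0, 1]. This is a simplified illustration under assumptions: the Mertens-style well-exposedness weight, the pyramid depth, and all function names are choices made here, not the paper's; the linked repository is the authoritative implementation.

```python
import numpy as np

def _blur(x):
    # separable 3-tap [1, 2, 1]/4 blur with edge padding
    p = np.pad(x, ((1, 1), (0, 0)), mode='edge')
    x = (p[:-2] + 2.0 * p[1:-1] + p[2:]) / 4.0
    p = np.pad(x, ((0, 0), (1, 1)), mode='edge')
    return (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:]) / 4.0

def _gauss_pyr(x, levels):
    pyr = [x]
    for _ in range(levels - 1):
        pyr.append(_blur(pyr[-1])[::2, ::2])
    return pyr

def _upsample(x, shape):
    up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return _blur(up[:shape[0], :shape[1]])

def _lap_pyr(x, levels):
    g = _gauss_pyr(x, levels)
    pyr = [g[i] - _upsample(g[i + 1], g[i].shape) for i in range(levels - 1)]
    pyr.append(g[-1])
    return pyr

def amef_dehaze(img, gammas=(1.0, 2.0, 3.0), levels=3):
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    # 1. artificially under-expose: gamma > 1 darkens a [0, 1] image
    exposures = [img ** g for g in gammas]
    # 2. Mertens-style well-exposedness weight (favors mid-gray pixels)
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * 0.2 ** 2)) for e in exposures]
    total = sum(weights) + 1e-12
    weights = [w / total for w in weights]
    # 3. multi-scale Laplacian blending of the exposure stack:
    #    image Laplacian pyramids weighted by Gaussian weight pyramids
    fused = [np.zeros_like(lv) for lv in _lap_pyr(img, levels)]
    for e, w in zip(exposures, weights):
        lp, gw = _lap_pyr(e, levels), _gauss_pyr(w, levels)
        for i in range(levels):
            fused[i] += gw[i] * lp[i]
    out = fused[-1]
    for i in range(levels - 2, -1, -1):  # collapse the fused pyramid
        out = fused[i] + _upsample(out, fused[i].shape)
    return np.clip(out, 0.0, 1.0)
```

Note the design point the abstract emphasizes: no transmission or depth map is estimated anywhere; the haze is suppressed purely by fusing darker renditions of the same image.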

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.

2018

NTIRE 2018 Challenge on Image Dehazing: Methods and Results

Authors
Ancuti, C; Ancuti, CO; Timofte, R; Van Gool, L; Zhang, L; Yang, MH; Patel, VM; Zhang, H; Sindagi, VA; Zhao, RH; Ma, XP; Qin, Y; Jia, LM; Friedel, K; Ki, S; Sim, H; Choi, JS; Kim, SY; Seo, S; Kim, S; Kim, M; Mondal, R; Santra, S; Chanda, B; Liu, JL; Mei, KF; Li, JC; Luyao; Fang, FM; Jiang, AW; Qu, XC; Liu, T; Wang, PF; Sun, B; Deng, JF; Zhao, YH; Hong, M; Huang, JY; Chen, YZ; Chen, ER; Yu, XL; Wu, TT; Genc, A; Engin, D; Ekenel, HK; Liu, WZ; Tong, T; Li, G; Gao, QQ; Li, Z; Tang, DF; Chen, YL; Huo, ZY; Alvarez Gila, A; Galdran, A; Bria, A; Vazquez Corral, J; Bertalmo, M; Demir, HS; Adil, OF; Phung, HX; Jin, X; Chen, JL; Shan, CW; Chen, ZB;

Publication
PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW)

Abstract
This paper reviews the first challenge on image dehazing (restoration of rich details in hazy images), with a focus on the proposed solutions and results. The challenge had two tracks: Track 1 employed indoor images (the I-HAZE dataset), while Track 2 employed outdoor images (the O-HAZE dataset). The hazy images were captured in the presence of real haze, generated by professional haze machines. The I-HAZE dataset contains 35 scenes that correspond to indoor domestic environments, with objects of different colors and specularities. O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. The dehazing process was learnable through provided pairs of haze-free and hazy training images. Each track had approximately 120 registered participants, and 21 teams competed in the final testing phase. Their results gauge the state of the art in image dehazing.

2018

NTIRE 2018 Challenge on Spectral Reconstruction from RGB Images

Authors
Arad, B; Ben Shahar, O; Timofte, R; Van Gool, L; Zhang, L; Yang, MH; Xiong, ZW; Chen, C; Shi, Z; Liu, D; Wu, F; Lanaras, C; Galliani, S; Schindler, K; Stiebel, T; Koppers, S; Seltsam, P; Zhou, RF; El Helou, M; Lahoud, F; Shahpaski, M; Zheng, K; Gao, LR; Zhang, B; Cui, XM; Yu, HY; Can, YB; Alvarez Gila, A; van de Weijer, J; Garrote, E; Galdran, A; Sharma, M; Koundinya, S; Upadhyay, A; Manekar, R; Mukhopadhyay, R; Sharma, H; Chaudhury, S; Nagasubramanian, K; Ghosal, S; Singh, AK; Singh, A; Ganapathysubramanian, B; Sarkar, S;

Publication
PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW)

Abstract
This paper reviews the first challenge on spectral image reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. The challenge was divided into two tracks: the "Clean" track sought HS recovery from noiseless RGB images obtained from a known response function (representing a spectrally-calibrated camera), while the "Real World" track challenged participants to recover HS cubes from JPEG-compressed RGB images generated by an unknown response function. To facilitate the challenge, the BGU Hyperspectral Image Database [4] was extended to provide participants with 256 natural HS training images, plus 5 and 10 additional images for validation and testing, respectively. The "Clean" and "Real World" tracks had 73 and 63 registered participants, respectively, with 12 teams competing in the final testing phase. The proposed methods and their corresponding results are reported in this review.
