2019
Authors
Smailagic, A; Sharan, A; Costa, P; Galdran, A; Gaudio, A; Campilho, A;
Publication
IMAGE ANALYSIS AND RECOGNITION (ICIAR 2019), PT II
Abstract
Diabetic Retinopathy is the leading cause of blindness in the working-age population of the world. The main aim of this paper is to improve the accuracy of Diabetic Retinopathy detection by introducing a shadow removal and color correction step as a preprocessing stage for eye fundus images. For this, we rely on recent findings indicating that applying image dehazing in the inverted intensity domain amounts to illumination compensation. Inspired by this work, we propose a Shadow Removal Layer that allows us to learn the preprocessing function for a particular task. We show that learning the preprocessing function improves the performance of the network on the Diabetic Retinopathy detection task.
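The abstract reports that dehazing applied to inverted image intensities acts as illumination compensation. The sketch below illustrates that idea with a crude dark-channel-style dehazer as a stand-in; it is not the learnable Shadow Removal Layer proposed in the paper, and the omega and t_min parameters are illustrative assumptions.

import numpy as np

def compensate_illumination(img, omega=0.95, t_min=0.1):
    # img: float RGB fundus image in [0, 1], shape (H, W, 3)
    inv = 1.0 - img                                  # shadows become "haze"
    atmosphere = inv.reshape(-1, 3).max(axis=0)      # crude per-channel airlight estimate
    dark = (inv / atmosphere).min(axis=2)            # per-pixel dark channel
    t = np.clip(1.0 - omega * dark, t_min, 1.0)      # transmission map
    dehazed = (inv - atmosphere) / t[..., None] + atmosphere
    return np.clip(1.0 - dehazed, 0.0, 1.0)          # invert back: illumination compensated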
2020
Authors
Smailagic, A; Costa, P; Gaudio, A; Khandelwal, K; Mirshekari, M; Fagert, J; Walawalkar, D; Xu, SS; Galdran, A; Zhang, P; Campilho, A; Noh, HY;
Publication
WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY
Abstract
Active learning (AL) methods create an optimized labeled training set from unlabeled data. We introduce a novel online active deep learning method for medical image analysis, extending our MedAL AL framework to present new results in this paper. A novel sampling method queries the unlabeled examples that maximize the average distance to all training set examples. Our online method enhances the performance of its underlying baseline deep network. These novelties contribute to significant performance improvements, including improving the model's underlying deep network accuracy by 6.30%, using only 25% of the labeled dataset to achieve baseline accuracy, reducing the number of backpropagated images during training by as much as 67%, and demonstrating robustness to class imbalance in binary and multiclass tasks. This article is categorized under: Technologies > Machine Learning; Technologies > Classification; Application Areas > Health Care.
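A minimal sketch of the distance-based query rule summarized above: select the unlabeled examples whose average feature-space distance to the current training set is largest. The feature representation and the query batch size are assumptions, not the exact MedAL configuration.

import numpy as np

def query_most_distant(unlabeled_feats, train_feats, n_query=16):
    # Both inputs: (N, D) arrays of feature embeddings
    diffs = unlabeled_feats[:, None, :] - train_feats[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)   # pairwise Euclidean distances
    avg_dist = dists.mean(axis=1)            # average distance to the training set
    return np.argsort(-avg_dist)[:n_query]   # indices of the most distant unlabeled examples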
2020
Authors
Vazquez Corral, J; Galdran, A; Cyriac, P; Bertalmio, M;
Publication
JOURNAL OF REAL-TIME IMAGE PROCESSING
Abstract
We propose a method for color dehazing with four main characteristics: it does not introduce color artifacts, it does not depend on inverting any physical equation, it is based on models of visual perception, and it is fast, potentially running in real time. Our method converts the original input image to the HSV color space and works in the saturation and value domains by: (1) reducing the value component via a global constrained histogram flattening; (2) modifying the saturation component in consistency with the previously reduced value; and (3) performing a local contrast enhancement in the value component. Results show that our method competes with the state-of-the-art when dealing with standard hazy images, and outperforms it when dealing with challenging haze cases. Furthermore, our method is able to dehaze a Full HD image on a GPU in 90 ms.
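The three-step pipeline described above can be prototyped with off-the-shelf operations; the sketch below uses plain histogram equalization in place of the paper's constrained histogram flattening and CLAHE for the local contrast step, and the saturation update rule is an assumption rather than the paper's exact model.

import numpy as np
from skimage import color, exposure

def dehaze_hsv(rgb):
    # rgb: float image in [0, 1], shape (H, W, 3)
    hsv = color.rgb2hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    v_flat = exposure.equalize_hist(v)                # (1) global flattening of the value channel
    ratio = np.clip(v_flat / (v + 1e-6), 0.0, 2.0)
    s_new = np.clip(s * ratio, 0.0, 1.0)              # (2) saturation consistent with the new value
    v_local = exposure.equalize_adapthist(v_flat)     # (3) local contrast enhancement (CLAHE)
    return color.hsv2rgb(np.dstack([h, s_new, v_local]))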
2021
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Atwal, G; Phoulady, HA; Chen, XY; Chen, RZ; Li, JL; Wang, LS; Galdran, A; Bouchachia, H; Kaluva, KC; Vaidhya, K; Chunduru, A; Tarai, S; Nadimpalli, SPP; Vaidya, S; Kim, I; Rassadin, A; Tian, ZH; Sun, ZW; Jia, YZ; Men, XJ; Ramos, I; Cunha, A; Campilho, A;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Lung cancer is the deadliest type of cancer worldwide and late detection is the major factor behind the low survival rate of patients. Low-dose computed tomography has been suggested as a potential screening tool, but manual screening is costly and time-consuming. This has fueled the development of automatic methods for the detection, segmentation and characterization of pulmonary nodules. In spite of promising results, the application of automatic methods to clinical routine is not straightforward and only a limited number of studies have addressed the problem in a holistic way. With the goal of advancing the state of the art, the Lung Nodule Database (LNDb) Challenge on automatic lung cancer patient management was organized. The LNDb Challenge addressed lung nodule detection, segmentation and characterization, as well as prediction of patient follow-up according to the 2017 Fleischner Society pulmonary nodule guidelines. 294 CT scans were collected retrospectively at the Centro Hospitalar e Universitário de São João in Porto, Portugal, and each CT was annotated by at least one radiologist. Annotations comprised nodule centroids, segmentations and subjective characterization. 58 CTs and the corresponding annotations were withheld as a separate test set. A total of 947 users registered for the challenge and 11 successful submissions for at least one of the sub-challenges were received. For patient follow-up prediction, a maximum quadratic weighted Cohen's kappa of 0.580 was obtained. In terms of nodule detection, a sensitivity below 0.4 (and 0.7) at 1 false positive per scan was obtained for nodules identified by at least one (and two) radiologist(s). For nodule segmentation, a maximum Jaccard score of 0.567 was obtained, surpassing the interobserver variability. In terms of nodule texture characterization, a maximum quadratic weighted Cohen's kappa of 0.733 was obtained, with part-solid nodules being particularly challenging to classify correctly. Detailed analysis of the proposed methods and of the differences in their performance allows the major remaining challenges and future directions to be identified: data collection, augmentation/generation and evaluation of under-represented classes, the incorporation of scan-level information for better decision making, and the development of tools and challenges with clinically oriented goals. The LNDb Challenge and associated data remain publicly available so that future methods can be tested and benchmarked, promoting the development of new algorithms in lung cancer medical image analysis and patient follow-up recommendation.
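The follow-up and texture sub-challenges are scored with the quadratic weighted Cohen's kappa; a minimal example of computing it with scikit-learn is shown below (the labels are illustrative, not challenge data).

from sklearn.metrics import cohen_kappa_score

reference  = [0, 1, 2, 2, 1, 0, 2]   # e.g., radiologist follow-up classes
prediction = [0, 1, 1, 2, 1, 0, 2]   # e.g., algorithm predictions
kappa = cohen_kappa_score(reference, prediction, weights="quadratic")
print(f"quadratic weighted kappa: {kappa:.3f}")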
2018
Authors
Al Rawi, M; Sebastien, T; Isasi, A; Galdran, A; Rodriguez, J; Elmgren, F; Bastos, J; Pinto, M;
Publication
PROCEEDINGS OF THE ASME 37TH INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE AND ARCTIC ENGINEERING, 2018, VOL 7A
Abstract
Matching two regions represented in bathymetric data that have some form of geographical overlap is an important and challenging task in underwater mapping. It is important because of the possible error in estimating the geographical location of each point underwater. It is challenging due to the sheer number of acquired bathymetric data points. The matching could also play a vital role in the registration of underwater images and/or map fusion, if both bathymetric and intensity scans are considered. Compared to the exhaustive search, which requires quadratic time, O(n²), the efficient bathymetric matching algorithm proposed in this work finds several match points in linear time, thus requiring O(n) computations. The paper thus presents a new algorithm that allows the bathymetric data of the common areas of two submarine regions, sampled in underwater missions, to be compiled.
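The abstract contrasts exhaustive O(n²) matching with a linear-time scheme but does not detail the algorithm; the grid-hashing sketch below is only a generic illustration of how overlapping points can be matched in roughly O(n), not a reconstruction of the paper's method. The cell size and tolerance are hypothetical parameters.

from collections import defaultdict

def match_points(scan_a, scan_b, cell=1.0, tol=0.5):
    # scan_a, scan_b: lists of (x, y, depth) bathymetric samples
    grid = defaultdict(list)
    for x, y, d in scan_a:                       # single pass: hash scan A into grid cells
        grid[(int(x // cell), int(y // cell))].append((x, y, d))
    matches = []
    for x, y, d in scan_b:                       # single pass: probe with scan B
        for p in grid[(int(x // cell), int(y // cell))]:
            if abs(p[0] - x) <= tol and abs(p[1] - y) <= tol:
                matches.append(((x, y, d), p))
                break
    return matches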
2019
Authors
Burlina, P; Galdran, A; Costa, P; Cohen, A; Campilho, A;
Publication
Computational Retinal Image Analysis
Abstract