2017
Authors
Al Rawi, M; Galdran, A; Elmgren, F; Rodriguez, J; Bastos, J; Pinto, M;
Publication
2017 IEEE JORDAN CONFERENCE ON APPLIED ELECTRICAL ENGINEERING AND COMPUTING TECHNOLOGIES (AEECT)
Abstract
Sidescan sonars are widely deployed in underwater imaging. They can image the seabed at acceptable resolutions, ranging from a few centimeters to about 10 centimeters. Yet sonar images remain of substantially lower visual quality, as they suffer from several problems: acoustic shadows that vary with vehicle heading and sonar grazing angle, speckle noise, and geometric deformation due to ping variation and the speed of the vehicle carrying the sonar. Landmark detection in sidescan sonar images is vital for finding objects and locations of interest that are useful in various underwater operations. The objective of this work is to propose novel landmark detection methods for this class of images. A cubic smoothing spline fitted to the across-track signals is proposed as a method to detect objects and their shadows. To cover a large area, experimental data were acquired during missions performed in Melenara Bay (Las Palmas, Spain) using autonomous underwater vehicles (AUVs) equipped with a Klein 3500 sidescan sonar. The AUVs were deployed in two missions (one mission performed each day), and a total of 25 high-resolution images were acquired: 12 parallel-path images in the first mission and 13 in the second, with an angle of 70 degrees between the directions of mission #1 and mission #2. This difference in direction was necessary to ensure different acoustic shadows between the two sets of images, each set being generated by a different mission. Results show that the proposed methods are powerful in detecting landmarks in these challenging images.
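The spline-based detection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: scipy's `UnivariateSpline` with `k=3` serves as the cubic smoothing spline, and the function name, smoothing factor, and residual threshold are all assumptions chosen for a toy signal.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def detect_landmarks(ping, smooth, k_sigma=2.0):
    """Fit a cubic smoothing spline to one across-track sonar ping and flag
    samples far above (object highlight) or below (acoustic shadow) the trend.
    `smooth` and `k_sigma` are illustrative parameters, not from the paper."""
    x = np.arange(len(ping))
    spline = UnivariateSpline(x, ping, k=3, s=smooth)  # cubic smoothing spline
    resid = ping - spline(x)                           # deviation from seabed trend
    thr = k_sigma * resid.std()
    highlights = np.where(resid > thr)[0]   # bright returns: candidate objects
    shadows = np.where(resid < -thr)[0]     # dark returns: acoustic shadows
    return highlights, shadows

# Synthetic ping: smooth seabed backscatter plus one bright object and its shadow.
rng = np.random.default_rng(0)
x = np.arange(200)
ping = 1.0 + 0.002 * x + 0.01 * rng.standard_normal(200)
ping[80:85] += 0.5    # object highlight
ping[90:100] -= 0.4   # acoustic shadow behind it
hi, sh = detect_landmarks(ping, smooth=5.0)  # smoothing hand-picked for this toy ping
```

With a smoothing factor large enough that the spline follows only the seabed trend, the object and its shadow appear as large positive and negative residuals, respectively.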
2018
Authors
Costa, P; Galdran, A; Smailagic, A; Campilho, A;
Publication
IEEE ACCESS
Abstract
Diabetic retinopathy (DR) detection is a critical retinal image analysis task in the context of early blindness prevention. Unfortunately, training a model to accurately detect DR based on the presence of different retinal lesions typically requires a dataset with medical experts' annotations at the pixel level. In this paper, a new methodology based on the multiple instance learning (MIL) framework is developed to overcome this requirement by leveraging the implicit information present in annotations made at the image level. Contrary to previous MIL-based DR detection systems, the main contribution of the proposed technique is the joint optimization of the instance-encoding and image-classification stages. In this way, more useful mid-level representations of pathological images can be obtained. The explainability of the model's decisions is further enhanced by means of a new loss function enforcing appropriate instance and mid-level representations. The proposed technique achieves comparable or better results than other recently proposed methods, with 90% area under the receiver operating characteristic curve (AUC) on Messidor, 93% AUC on DR1, and 96% AUC on DR2, while improving the interpretability of the produced decisions.
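The core MIL idea (image-level labels supervising instance-level reasoning) can be sketched as follows. This is a generic max-pooling MIL forward pass, not the paper's architecture; the linear encoder, weights, and synthetic patches are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mil_image_score(patches, w, b):
    """Generic MIL forward pass: every patch ('instance') is scored by a
    shared encoder, and the image-level prediction is the maximum instance
    score, so only an image-level label is needed for training."""
    instance_scores = sigmoid(patches @ w + b)   # one score per patch
    return instance_scores.max(), instance_scores  # MIL max-pooling

rng = np.random.default_rng(1)
w = np.array([2.0, -1.0])
b = -0.5
healthy = rng.normal(0.0, 0.1, size=(16, 2))     # no lesion-like patch
diseased = np.vstack([healthy, [[3.0, 0.0]]])    # one synthetic 'lesion' patch
s_healthy, _ = mil_image_score(healthy, w, b)
s_diseased, inst = mil_image_score(diseased, w, b)
```

A single strongly lesion-like patch drives the image-level score high, and the per-instance scores double as a localization (attention) map, which is what makes the decisions interpretable.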
2018
Authors
Meyer, MI; Galdran, A; Costa, P; Mendonça, AM; Campilho, A;
Publication
Image Analysis and Recognition - 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, June 27-29, 2018, Proceedings
Abstract
The classification of retinal vessels into arteries and veins in eye fundus images is a relevant task for the automatic assessment of vascular changes. This paper presents a new approach to this problem by means of a Fully Convolutional Neural Network that is specifically adapted for artery/vein classification. For this, a loss function that focuses only on pixels belonging to the retinal vessel tree is built. The relevance of providing the model with different chromatic components of the source images is also analyzed. The performance of the proposed method is evaluated on the RITE dataset of retinal images, achieving promising results, with an accuracy of 96% on large-caliber vessels and an overall accuracy of 84%. © 2018, Springer International Publishing AG, part of Springer Nature.
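A loss restricted to vessel-tree pixels can be sketched as below. This is a minimal NumPy illustration of the idea (a pixelwise loss masked by the vessel map), with function name, shapes, and label convention assumed, not taken from the paper.

```python
import numpy as np

def masked_bce(pred, target, vessel_mask, eps=1e-7):
    """Binary cross-entropy restricted to vessel pixels: background pixels
    contribute nothing to the loss (illustrative sketch of a vessel-masked
    loss for artery/vein classification)."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return bce[vessel_mask.astype(bool)].mean()

# 4x4 toy maps: only the two vessel pixels enter the loss.
pred   = np.full((4, 4), 0.5); pred[0, 0] = 0.9; pred[0, 1] = 0.2
target = np.zeros((4, 4));     target[0, 0] = 1.0   # e.g. artery=1, vein=0
mask   = np.zeros((4, 4));     mask[0, 0] = mask[0, 1] = 1.0
loss = masked_bce(pred, target, mask)
```

Because the mean is taken only over masked pixels, predictions on the (dominant) background cannot dilute the gradient signal, which is the point of focusing the loss on the vessel tree.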
2018
Authors
Galdran, A; Costa, P; Vazquez Corral, J; Campilho, A;
Publication
2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)
Abstract
Image dehazing addresses an undesired loss of visibility in outdoor images due to the presence of fog. Recently, machine-learning techniques have shown great dehazing ability. However, in order to be trained, they require training sets with pairs of foggy images and their clean counterparts, or a depth map. In this paper, we propose to learn the appearance of fog from weakly labeled data. Specifically, we only require a single label per image stating whether it contains fog or not. Based on the Multiple-Instance Learning framework, we propose a model that can learn from image-level labels to predict whether an image contains haze by reasoning at a local level. The fog detection performance of the proposed method compares favorably with two popular techniques, and the attention maps generated by the model demonstrate that it effectively learns to disregard sky regions as indicative of the presence of fog, a common pitfall of current image dehazing techniques.
2018
Authors
Meyer, MI; Galdran, A; Mendonca, AM; Campilho, A;
Publication
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2018, PT II
Abstract
This paper introduces a novel strategy for simultaneously locating two key anatomical landmarks in retinal images of the eye fundus, namely the optic disc and the fovea. Instead of attempting to classify each pixel as belonging to the background, the optic disc, or the fovea center, which would lead to a highly class-imbalanced setting, the problem is reformulated as a pixelwise regression task. The regressed quantity is the distance to the closest landmark of interest. A Fully Convolutional Deep Neural Network is optimized to predict this distance for each image location, implicitly casting the problem into a per-pixel Multi-Task Learning approach by which a globally consistent distribution of distances across the entire image can be learned. Once trained, the two minimal distances predicted by the model are selected as the locations of the optic disc and the fovea. The joint learning of every pixel position relative to the optic disc and the fovea favors an automatic understanding of the overall anatomical distribution. This results in an effective technique that can detect both locations simultaneously, as opposed to previous methods that handle both tasks separately. Comprehensive experimental results on a large public dataset validate the proposed approach.
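The regression target and the test-time readout described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the target is the exact distance map, and the greedy minimum-picking with a `min_sep` separation is a simplification of the model's readout, not the paper's procedure.

```python
import numpy as np

def distance_target(shape, landmarks):
    """Regression target: for every pixel, the Euclidean distance to the
    closest landmark (e.g. optic disc and fovea centers)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dists = [np.hypot(ys - r, xs - c) for r, c in landmarks]
    return np.minimum.reduce(dists)

def locate(pred_map, n_landmarks=2, min_sep=5):
    """Test-time readout (greedy sketch): take the lowest-valued,
    well-separated minima of the predicted map as landmark locations.
    `min_sep` is an assumed separation, not from the paper."""
    found = []
    for idx in np.argsort(pred_map, axis=None):
        p = np.unravel_index(idx, pred_map.shape)
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= min_sep for q in found):
            found.append(p)
        if len(found) == n_landmarks:
            break
    return found

target = distance_target((32, 32), [(8, 8), (20, 24)])  # toy disc/fovea centers
locs = locate(target)
```

Applied to a perfect prediction, the readout recovers both landmark centers; in practice the network's predicted map replaces `target`.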
2018
Authors
Galdran, A; Costa, P; Bria, A; Araujo, T; Mendonca, AM; Campilho, A;
Publication
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2018, PT I
Abstract
Due to inevitable differences between the data used for training modern CAD systems and the data encountered when they are deployed in clinical scenarios, the ability to automatically assess the quality of predictions when no expert annotation is available can be critical. In this paper, we propose a new method for quality assessment of retinal vessel tree segmentations in the absence of a reference ground-truth. For this, we artificially degrade expert-annotated vessel map segmentations and then train a CNN to predict the similarity between the degraded images and their corresponding ground-truths. This similarity can be interpreted as a proxy to the quality of a segmentation. The proposed model can produce a visually meaningful quality score, effectively predicting the quality of a vessel tree segmentation in the absence of a manually segmented reference. We further demonstrate the usefulness of our approach by applying it to automatically find a threshold for soft probabilistic segmentations on a per-image basis. For an independent state-of-the-art unsupervised vessel segmentation technique, the thresholds selected by our approach lead to statistically significant improvements in F1-score (+2.67%) and Matthews Correlation Coefficient (+3.11%) over the thresholds derived from ROC analysis on the training set. The score is also shown to correlate strongly with F1 and MCC when a reference is available.
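The training-data construction described above (degrade an expert segmentation, use a similarity score as the quality target) can be sketched as follows. This is a toy illustration: random vessel-pixel deletion stands in for the paper's degradations, and F1 against the ground truth is one of the similarity scores the abstract mentions.

```python
import numpy as np

def degrade(seg, drop_frac, rng):
    """Create a plausibly worse segmentation by deleting a fraction of
    vessel pixels (one simple artificial degradation; illustrative only)."""
    out = seg.copy()
    on = np.flatnonzero(out)
    kill = rng.choice(on, size=int(drop_frac * on.size), replace=False)
    out.flat[kill] = 0
    return out

def f1(pred, gt):
    """F1-score between a binary segmentation and its reference; the CNN in
    the paper learns to predict such a similarity without seeing `gt`."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt.astype(bool)).sum()
    fn = np.logical_and(~pred.astype(bool), gt).sum()
    return 2 * tp / (2 * tp + fp + fn)

rng = np.random.default_rng(0)
gt = (rng.random((64, 64)) < 0.15).astype(np.uint8)  # toy 'vessel tree'
mild = degrade(gt, 0.1, rng)                         # lightly degraded copy
severe = degrade(gt, 0.5, rng)                       # heavily degraded copy
```

Pairs such as (`mild`, F1=high) and (`severe`, F1=low) are exactly the kind of (degraded map, similarity) training examples the method builds, letting a CNN learn a reference-free quality proxy.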