2019
Authors
Rocha, J; Cunha, A; Mendonça, AM;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2019, PT I
Abstract
Lung cancer is among the deadliest diseases in the world. The detection and characterization of pulmonary nodules are crucial for an accurate diagnosis, which is of vital importance to increase the patients’ survival rates. The segmentation process contributes to the mentioned characterization, but faces several challenges, due to the diversity in nodular shape, size, and texture, as well as the presence of adjacent structures. This paper proposes two methods for pulmonary nodule segmentation in Computed Tomography (CT) scans. The first is a conventional approach that applies the Sliding Band Filter (SBF) to estimate the center of the nodule and, consequently, the filter’s support points, which match the initial border coordinates. This preliminary segmentation is then refined to include mainly the nodular area, and no other regions (e.g. vessels and pleural wall). The second approach is based on Deep Learning, using the U-Net to achieve the same goal. This work compares both performances, and consequently identifies which one is the most promising tool to promote early lung cancer screening and improve nodule characterization. Both methodologies used 2653 nodules from the LIDC database: the SBF-based one achieved a Dice score of 0.663, while the U-Net achieved 0.830, yielding results closer to the ground-truth reference annotated by specialists and thus proving to be the more reliable approach. © Springer Nature Switzerland AG 2019.
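The abstract reports Dice scores for both segmentation pipelines; below is a minimal, hedged PyTorch sketch of a small 2-D U-Net for single-channel CT patches together with the Dice overlap metric. It is an illustration only: the depth, channel widths, patch size, and training details are assumptions, not the authors' exact architecture.

# Hedged sketch: small 2-D U-Net for CT nodule patches plus the Dice metric.
# NOT the paper's exact network; sizes and depth are assumptions.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # single-channel nodule-mask logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

def dice_score(pred, target, eps=1e-6):
    # Dice = 2|A∩B| / (|A| + |B|), the overlap metric reported in the abstract
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

net = SmallUNet()
patch = torch.randn(1, 1, 64, 64)   # one 64x64 CT patch (assumed size)
mask_logits = net(patch)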
2019
Authors
Valerio, MT; Gomes, S; Salgado, M; Oliveira, HP; Cunha, A;
Publication
CENTERIS2019--INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS/PROJMAN2019--INTERNATIONAL CONFERENCE ON PROJECT MANAGEMENT/HCIST2019--INTERNATIONAL CONFERENCE ON HEALTH AND SOCIAL CARE INFORMATION SYSTEMS AND TECHNOLOGIES
Abstract
Wireless capsule endoscopy is a relatively novel technique used for imaging of the gastrointestinal tract. Unlike traditional approaches, it allows painless visualisation of the whole of the gastrointestinal tract, including the small bowel, a region of difficult access. Endoscopic capsules record for about 8 h, producing around 60,000 images. These are analysed by an expert who identifies abnormalities present in the frames, a process that is very tedious and prone to errors. Thus, there is a clear need to develop systems that automatically analyse this data and detect lesions. Lesion detection achieved a precision of 0.94 and a recall of 0.93 by fine-tuning the pre-trained DenseNet-161 model. (C) 2019 The Authors. Published by Elsevier B.V.
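Since the reported lesion-detection results come from fine-tuning a pre-trained DenseNet-161, a hedged sketch of such a setup with torchvision follows. The two-class head, optimizer, learning rate, and input size are assumptions for illustration, not the authors' training recipe.

# Hedged sketch: fine-tuning a pre-trained DenseNet-161 for frame-level
# lesion classification. Head, optimizer, and input size are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # lesion vs. normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (B, 3, 224, 224) endoscopy frames; labels: (B,) in {0, 1}
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()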
2019
Authors
Costa, P; Araujo, T; Aresta, G; Galdran, A; Mendonca, AM; Smailagic, A; Campilho, A;
Publication
PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)
Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly-supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Curve (AUC) to 95.8% AUC while maintaining only approximately 5% of Inception V3's number of parameters. The same model is able to achieve 97.1% AUC in a cross-dataset experiment.
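As a hedged illustration of the weak-supervision idea described above (converting a pre-trained CNN into a model that both classifies the image and localizes lesions from image labels only), here is a minimal PyTorch sketch. For brevity it uses a ResNet-18 trunk instead of the Inception V3 used in the paper, and the 1x1 scoring layer with max-pooling aggregation illustrates the general technique rather than the exact EyeWeS design.

# Hedged sketch: pre-trained CNN trunk -> per-region lesion scores -> image label.
# Uses ResNet-18 for brevity (the paper uses Inception V3); details are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class WeaklySupervisedClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        trunk = models.resnet18(weights=None)
        # Convolutional trunk only: drop global pooling and the fc head.
        self.features = nn.Sequential(*list(trunk.children())[:-2])
        # 1x1 convolution scores every spatial cell for lesion evidence.
        self.scorer = nn.Conv2d(512, 1, kernel_size=1)

    def forward(self, x):
        fmap = self.features(x)                 # (B, 512, H/32, W/32)
        lesion_map = self.scorer(fmap)          # per-region lesion evidence
        # Max over space: the image is DR-positive if any region is.
        image_logit = lesion_map.flatten(2).max(dim=2).values
        return image_logit, lesion_map          # lesion_map localizes findings

model = WeaklySupervisedClassifier()
x = torch.randn(2, 3, 512, 512)                 # two fundus images
logit, heatmap = model(x)                       # trained with image labels only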
2019
Authors
Pereira, T; Ding, C; Gadhoumi, K; Tran, N; Colorado, RA; Meisel, K; Hu, X;
Publication
PHYSIOLOGICAL MEASUREMENT
Abstract
2019
Authors
Karácsony, T; Hansen, JP; Iversen, HK; Puthusserypady, S;
Publication
ACM International Conference Proceeding Series
Abstract
Though Motor Imagery (MI) stroke rehabilitation effectively promotes neural reorganization, current therapeutic methods are immeasurable and their repetitiveness can be demotivating. In this work, a real-time electroencephalogram (EEG) based MI-BCI (Brain Computer Interface) system with a virtual reality (VR) game as motivational feedback has been developed for stroke rehabilitation. If the subject successfully hits one of the targets, it explodes, thus providing feedback on a successfully imagined and virtually executed movement of hands or feet. Novel classification algorithms with deep learning (DL) and convolutional neural network (CNN) architectures, combined with a unique trial onset detection technique, were used. Our classifiers performed better than the previous architectures on datasets from the PhysioNet offline database, and provided fine classification in the real-time game setting using a 0.5-second, 16-channel input to the CNN architectures. Ten participants reported the training to be interesting, fun and immersive. "It is a bit weird, because it feels like it would be my hands" was one of the comments from a test person. The VR system induced slight discomfort, and a moderate effort for MI activations was reported. We conclude that MI-BCI-VR systems with classifiers based on DL for real-time game applications should be considered for motivating MI stroke rehabilitation. © 2019 Association for Computing Machinery.
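To make the real-time classification step concrete, below is a hedged PyTorch sketch of a compact CNN that maps a 0.5-second, 16-channel EEG window to motor-imagery classes. The sampling rate (160 Hz, hence 80 samples per window), the layer sizes, and the number of classes are assumptions for illustration, not the authors' architecture or their trial-onset detection technique.

# Hedged sketch: compact CNN for a 0.5 s, 16-channel EEG window.
# Sampling rate, layer sizes, and class count are assumptions.
import torch
import torch.nn as nn

class MICNN(nn.Module):
    def __init__(self, n_channels=16, n_samples=80, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            # Temporal convolution along samples, shared across electrodes.
            nn.Conv2d(1, 16, kernel_size=(1, 11), padding=(0, 5)),
            nn.BatchNorm2d(16), nn.ELU(),
            # Spatial convolution mixing all 16 electrodes.
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 4), n_classes),
        )

    def forward(self, x):                  # x: (B, 1, 16, 80) EEG window
        return self.net(x)

clf = MICNN()
window = torch.randn(1, 1, 16, 80)         # one 0.5 s window at 160 Hz (assumed)
probs = torch.softmax(clf(window), dim=1)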
2019
Authors
Galdran, A; Meyer, M; Costa, P; Mendonca, AM; Campilho, A;
Publication
2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019)
Abstract
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task within the field of retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists can find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that takes into account such uncertainty by design. For this, we formulate the A/V classification task as a four-class segmentation problem, and a Convolutional Neural Network is trained to classify pixels into background, A/V, or uncertain classes. The resulting technique can directly provide pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method automatically segments the vessel tree. Experimental results show a performance comparable or superior to several recent A/V classification approaches. In addition, the proposed technique also attains state-of-the-art performance when evaluated for the task of vessel segmentation, generalizing to data that was not used during training, even with considerable differences in terms of appearance and resolution.
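The four-class formulation can be illustrated with a short, hedged PyTorch sketch: each pixel is labelled background, artery, vein, or uncertain, and a segmentation network is trained with pixelwise cross-entropy. The tiny fully convolutional stand-in network and the class indexing below are assumptions for illustration; the paper's actual backbone is a full encoder-decoder CNN.

# Hedged sketch: four-class pixelwise formulation (background / artery / vein /
# uncertain). The tiny network below is a stand-in, not the paper's model.
import torch
import torch.nn as nn

CLASSES = {0: "background", 1: "artery", 2: "vein", 3: "uncertain"}

segnet = nn.Sequential(                        # stand-in for a full encoder-decoder
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, len(CLASSES), 1),            # 4 logits per pixel
)
criterion = nn.CrossEntropyLoss()

image = torch.randn(1, 3, 256, 256)            # RGB fundus patch
target = torch.randint(0, 4, (1, 256, 256))    # pixelwise labels incl. "uncertain"

logits = segnet(image)                         # (1, 4, 256, 256)
loss = criterion(logits, target)

# Softmax over the 4 classes gives per-pixel probabilities; the mass on class 3
# acts as a direct pixelwise uncertainty estimate, and the union of artery,
# vein, and uncertain pixels yields a vessel segmentation.
probs = torch.softmax(logits, dim=1)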