
Publications by Luís Filipe Teixeira

2020

Understanding the decisions of CNNs: An in-model approach

Authors
Rio Torto, I; Fernandes, K; Teixeira, LF;

Publication
PATTERN RECOGNITION LETTERS

Abstract
With the outstanding predictive performance of Convolutional Neural Networks on different tasks and their widespread use in real-world scenarios, it is essential to understand and trust these black-box models. While most of the literature focuses on post-model methods, we propose a novel in-model joint architecture, composed of an explainer and a classifier. This architecture outputs not only a class label, but also a visual explanation of that decision, without the need for additional labelled data to train the explainer beyond the image class. The model is trained end-to-end, with the classifier taking as input an image and the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of that explanation. Moreover, this approach can be employed with any classifier, provided that the necessary connections to the explainer are made. We also propose a three-phase training process and two alternative custom loss functions that regularise the produced explanations and encourage desired properties, such as sparsity and spatial contiguity. The architecture was validated on two datasets (a subset of ImageNet and a cervical cancer dataset), and the obtained results show that it is able to produce meaningful image- and class-dependent visual explanations, without direct supervision, aligned with intuitive visual features associated with the data. Quantitative assessment of explanation quality was conducted through iterative perturbation of the input image according to the explanation heatmaps. The impact on classification performance is studied in terms of average function value and AOPC (Area Over the MoRF (Most Relevant First) Curve). For further evaluation, we propose POMPOM (Percentage of Meaningful Pixels Outside the Mask) as another measurable criterion of explanation goodness. These analyses showed that the proposed method outperformed state-of-the-art post-model methods, such as LRP (Layer-wise Relevance Propagation).
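As a rough illustration of the MoRF-based evaluation mentioned in the abstract, the sketch below computes an AOPC-style score by iteratively perturbing the most relevant image regions first and averaging the drop in the target class score. The patch-based perturbation, the noise fill, and the helper names (`model`, `heatmap`) are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal AOPC (Area Over the MoRF Curve) sketch in PyTorch.
import torch

def aopc(model, image, heatmap, target, steps=100, patch=9):
    """Average drop in the target class score when perturbing the most
    relevant regions first (MoRF). `image` is (C, H, W), `heatmap` is (H, W)."""
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0)).softmax(dim=1)[0, target].item()
        # Rank pixel locations by relevance, most relevant first.
        order = torch.argsort(heatmap.flatten(), descending=True)
        h, w = heatmap.shape
        perturbed = image.clone()
        drops = []
        for k in range(steps):
            r, c = divmod(order[k].item(), w)
            r0, r1 = max(0, r - patch // 2), min(h, r + patch // 2 + 1)
            c0, c1 = max(0, c - patch // 2), min(w, c + patch // 2 + 1)
            # Replace the patch with uniform noise (one common choice).
            perturbed[:, r0:r1, c0:c1] = torch.rand_like(perturbed[:, r0:r1, c0:c1])
            score = model(perturbed.unsqueeze(0)).softmax(dim=1)[0, target].item()
            drops.append(base - score)
    return sum(drops) / len(drops)
```

A higher score indicates that the explanation highlights regions whose removal quickly degrades the prediction, i.e. a more faithful heatmap.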

2020

Using autoencoders as a weight initialization method on deep neural networks for disease detection

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
BMC MEDICAL INFORMATICS AND DECISION MAKING

Abstract
Background: As of today, cancer is still one of the most prevalent and high-mortality diseases, accounting for more than 9 million deaths in 2018. This has motivated researchers to study the application of machine learning-based solutions for cancer detection, to accelerate its diagnosis and help its prevention. Among several approaches, one is to automatically classify tumor samples through their gene expression analysis.
Methods: In this work, we aim to distinguish five different types of cancer through RNA-Seq datasets: thyroid, skin, stomach, breast, and lung. To do so, we have adopted a previously described methodology, with which we compare the performance of three different autoencoders (AEs) used as a deep neural network weight initialization technique. Our experiments consist of assessing two different approaches when training the classification model (fixing the weights after pre-training the AEs, or allowing fine-tuning of the entire network) and two different strategies for embedding the AEs into the classification network, namely by importing only the encoding layers or by inserting the complete AE. We then study how varying the number of layers in the first strategy, the AEs' latent vector dimension, and the imputation technique in the data preprocessing step impacts the network's overall classification performance. Finally, with the goal of assessing how well this pipeline generalizes, we apply the same methodology to two additional datasets that include features extracted from images of malaria thin blood smears and breast mass cell nuclei. We also discard the possibility of overfitting by using held-out test sets in the image datasets.
Results: The methodology attained good overall results for both RNA-Seq and image-extracted data. We outperformed the established baseline for all the considered datasets, achieving an average F1-score of 99.03, 89.95, and 98.84 and an MCC of 0.99, 0.84, and 0.98 for the RNA-Seq (when detecting thyroid cancer), the Malaria, and the Wisconsin Breast Cancer data, respectively.
Conclusions: We observed that the approach of fine-tuning the weights of the top layers imported from the AE reached higher results for all the presented experiments and all the considered datasets. We outperformed all previously reported results when compared to the established baselines.
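A minimal sketch of the weight-initialization strategy described above, assuming a simple fully connected autoencoder: the encoder is pretrained on a reconstruction objective and then imported into a classifier, either frozen (fixed weights) or left trainable (fine-tuning). The layer sizes, latent dimension, and input dimensionality are placeholders, not the paper's configuration.

```python
# Autoencoder-based weight initialization sketch in PyTorch.
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=20000, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(latent, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def build_classifier(pretrained_ae, n_classes=2, fine_tune=True):
    """Import only the encoding layers and attach a classification head."""
    encoder = pretrained_ae.encoder
    if not fine_tune:
        # Fixed-weights variant: freeze the transferred encoder.
        for p in encoder.parameters():
            p.requires_grad = False
    latent = encoder[-2].out_features
    return nn.Sequential(encoder, nn.Linear(latent, n_classes))

# Usage: pretrain `ae` with a reconstruction loss (e.g. MSE), then train
# `clf` with cross-entropy on the labelled samples.
ae = Autoencoder()
clf = build_classifier(ae, n_classes=5, fine_tune=True)
```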

2020

Deep Learning Models for Segmentation of Mobile-Acquired Dermatological Images

Authors
Andrade, C; Teixeira, LF; Vasconcelos, MJM; Rosado, L;

Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II

Abstract
With the ever-increasing occurrence of skin cancer, timely and accurate skin cancer detection has become clinically more imperative. A clinical mobile-based deep learning approach is a possible solution to this challenge. Nevertheless, there is a major impediment to the development of such a model: the scarce availability of labelled data acquired with mobile devices, namely macroscopic images. In this work, we present two experiments to assemble a robust deep learning model for macroscopic skin lesion segmentation and to capitalize on the sizable dermoscopic databases. In the first experiment, two groups of deep learning models, U-Net based and DeepLab based, were created and tested exclusively on the available macroscopic images. In the second experiment, the possibility of transferring knowledge between the domains was tested. To accomplish this, the selected model was retrained on the dermoscopic images and subsequently fine-tuned with the macroscopic images. The best model in the first experiment was a DeepLab-based model with a MobileNetV2 feature extractor with a width multiplier of 0.35, optimized with the soft Dice loss. This model comprised 0.4 million parameters and obtained a thresholded Jaccard coefficient of 72.97% and 78.51% on the Dermofit and SMARTSKINS databases, respectively. In the second experiment, with the use of transfer learning, the performance of this model was significantly improved on the first database, to 75.46%, and slightly decreased, to 78.04%, on the second.
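For reference, the soft Dice loss named above can be sketched as follows; the smoothing constant and the binary single-channel setup are assumptions, not the paper's exact formulation.

```python
# Soft Dice loss sketch in PyTorch for binary segmentation.
import torch

def soft_dice_loss(pred, target, eps=1.0):
    """pred: sigmoid probabilities, target: binary mask, both (N, 1, H, W)."""
    pred = pred.flatten(1)
    target = target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2 * intersection + eps) / (denom + eps)
    # Minimising (1 - Dice) directly optimises region overlap, which is
    # more robust to class imbalance than pixel-wise cross-entropy.
    return 1 - dice.mean()
```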

2021

Data Augmentation Using Adversarial Image-to-Image Translation for the Segmentation of Mobile-Acquired Dermatological Images

Authors
Andrade, C; Teixeira, LF; Vasconcelos, MJM; Rosado, L;

Publication
JOURNAL OF IMAGING

Abstract
Dermoscopic images allow the detailed examination of subsurface characteristics of the skin, which has led to the creation of several substantial databases of diverse skin lesions. However, the dermoscope is not an easily accessible tool in some regions. A less expensive alternative could be acquiring medium-resolution clinical macroscopic images of skin lesions. However, the limited volume of macroscopic images available, especially mobile-acquired, hinders the development of a clinical mobile-based deep learning approach. In this work, we present a technique to efficiently utilize the sizable number of dermoscopic images to improve the segmentation capacity of macroscopic skin lesion images. A Cycle-Consistent Adversarial Network is used to translate images between the two distinct domains created by the different image acquisition devices. A visual inspection was performed on several databases for qualitative evaluation of the results, based on the disappearance and appearance of intrinsic dermoscopic and macroscopic features. Moreover, the Fréchet Inception Distance was used as a quantitative metric. The quantitative segmentation results are demonstrated on the available macroscopic segmentation databases, SMARTSKINS and the Dermofit Image Library, yielding test-set thresholded Jaccard indices of 85.13% and 74.30%. These results establish a new state-of-the-art performance on the SMARTSKINS database.
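The thresholded Jaccard index reported above can be sketched as below; the 0.65 cutoff follows the common ISIC-challenge convention and is an assumption here, since the abstract does not state the threshold.

```python
# Thresholded Jaccard index sketch with NumPy.
import numpy as np

def thresholded_jaccard(pred_masks, gt_masks, threshold=0.65):
    """Per-image Jaccard scores below `threshold` count as 0, then average."""
    scores = []
    for pred, gt in zip(pred_masks, gt_masks):
        pred, gt = pred.astype(bool), gt.astype(bool)
        union = np.logical_or(pred, gt).sum()
        jaccard = np.logical_and(pred, gt).sum() / union if union else 1.0
        scores.append(jaccard if jaccard >= threshold else 0.0)
    return float(np.mean(scores))
```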

2021

Adversarial Data Augmentation on Breast MRI Segmentation

Authors
Teixeira, JF; Dias, M; Batista, E; Costa, J; Teixeira, LF; Oliveira, HP;

Publication
APPLIED SCIENCES-BASEL

Abstract
The scarcity of balanced and annotated datasets has been a recurring problem in medical image analysis. Several researchers have tried to fill this gap by employing dataset synthesis with generative adversarial networks (GANs). Breast magnetic resonance imaging (MRI) provides complex, texture-rich medical images with the same annotation shortage issues, for which, to the best of our knowledge, no previous work has tried synthesizing data. Within this context, our work addresses the problem of synthesizing breast MRI images from corresponding annotations and evaluates the impact of this data augmentation strategy on a semantic segmentation task. We explored variations of image-to-image translation using conditional GANs, namely fitting the generator's architecture with residual blocks and experimenting with cycle consistency approaches. We studied the impact of these changes on visual verisimilitude and on how a U-Net segmentation model is affected by the use of synthetic data. We achieved sufficiently realistic-looking breast MRI images and maintained a stable segmentation score even when completely replacing the dataset with the synthetic set. Our results were promising, especially concerning the Pix2PixHD and Residual CycleGAN architectures.
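As a small illustration of the residual-block variation mentioned for the conditional GAN generators, the block below adds a skip connection around two convolutional layers; the channel count and the use of instance normalisation are assumptions, not the paper's exact configuration.

```python
# Residual block sketch in PyTorch, of the kind used in image-to-image
# translation generators.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels))

    def forward(self, x):
        # Skip connection: the block learns a residual on top of its input.
        return x + self.block(x)
```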

2021

Automatic quality inspection in the automotive industry: a hierarchical approach using simulated data

Authors
Rio-Torto, I; Campanico, AT; Pereira, A; Teixeira, LF; Filipe, V;

Publication
2021 IEEE 8th International Conference on Industrial Engineering and Applications (ICIEA)

Abstract
