Publications

Publications by CTM

2020

Using autoencoders as a weight initialization method on deep neural networks for disease detection

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
BMC MEDICAL INFORMATICS AND DECISION MAKING

Abstract
Background: As of today, cancer remains one of the most prevalent and high-mortality diseases, accounting for more than 9 million deaths in 2018. This has motivated researchers to study the application of machine learning-based solutions for cancer detection, to accelerate its diagnosis and help its prevention. Among several approaches, one is to automatically classify tumor samples through their gene expression analysis.

Methods: In this work, we aim to distinguish five different types of cancer through RNA-Seq datasets: thyroid, skin, stomach, breast, and lung. To do so, we have adopted a previously described methodology, with which we compare the performance of three different autoencoders (AEs) used as a deep neural network weight initialization technique. Our experiments consist of assessing two different approaches when training the classification model (fixing the weights after pre-training the AEs, or allowing fine-tuning of the entire network) and two different strategies for embedding the AEs into the classification network, namely importing only the encoding layers, or inserting the complete AE. We then study how the number of layers in the first strategy, the dimension of the AEs' latent vector, and the imputation technique used in the data preprocessing step affect the network's overall classification performance. Finally, to assess how well this pipeline generalizes, we apply the same methodology to two additional datasets that include features extracted from images of malaria thin blood smears and of breast mass cell nuclei. We also rule out overfitting by using held-out test sets for the image datasets.

Results: The methodology attained good overall results for both RNA-Seq and image-extracted data. We outperformed the established baseline for all the considered datasets, achieving an average F1-score of 99.03, 89.95, and 98.84 and an MCC of 0.99, 0.84, and 0.98 for the RNA-Seq (when detecting thyroid cancer), Malaria, and Wisconsin Breast Cancer data, respectively.

Conclusions: We observed that fine-tuning the weights of the top layers imported from the AE reached higher results across all the presented experiments and all the considered datasets, outperforming all previously reported results against the established baselines.
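The weight-initialization scheme the abstract describes (pre-train an autoencoder, import only its encoder into the classifier, then fine-tune the whole network) can be sketched in NumPy. The toy data, layer sizes, and learning rates below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for gene-expression data: 200 samples, 50 features, 2 classes.
# (Hypothetical data; the paper uses RNA-Seq profiles and five cancer types.)
X = rng.normal(size=(200, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Step 1: pre-train a one-hidden-layer autoencoder (50 -> 8 -> 50) ---
n_in, n_hid = X.shape[1], 8
W_enc = rng.normal(scale=0.1, size=(n_in, n_hid))
W_dec = rng.normal(scale=0.1, size=(n_hid, n_in))
lr = 0.01
for _ in range(200):
    H = np.tanh(X @ W_enc)            # encoding
    X_hat = H @ W_dec                 # reconstruction
    err = X_hat - X                   # reconstruction error
    gW_dec = H.T @ err / len(X)       # gradient of mean squared error
    gH = err @ W_dec.T * (1 - H**2)   # backprop through tanh
    gW_enc = X.T @ gH / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

# --- Step 2: import only the encoder into a classifier, then fine-tune all weights ---
W1 = W_enc.copy()                     # initialized from the pre-trained encoder
w2 = rng.normal(scale=0.1, size=n_hid)
for _ in range(300):
    H = np.tanh(X @ W1)
    p = sigmoid(H @ w2)
    g = (p - y) / len(X)              # gradient of binary cross-entropy
    gw2 = H.T @ g
    gH = np.outer(g, w2) * (1 - H**2)
    gW1 = X.T @ gH
    w2 -= 0.5 * gw2
    W1 -= 0.5 * gW1                   # fine-tuning: encoder weights also updated

acc = ((sigmoid(np.tanh(X @ W1) @ w2) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The alternative strategy the paper compares would simply skip the `W1` update, keeping the imported encoder weights fixed.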

2020

Deep Learning Models for Segmentation of Mobile-Acquired Dermatological Images

Authors
Andrade, C; Teixeira, LF; Vasconcelos, MJM; Rosado, L;

Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II

Abstract
With the ever-increasing occurrence of skin cancer, timely and accurate skin cancer detection has become clinically more imperative. A clinical mobile-based deep learning approach is a possible solution for this challenge. Nevertheless, there is a major impediment to the development of such a model: the scarce availability of labelled data acquired with mobile devices, namely macroscopic images. In this work, we present two experiments to assemble a robust deep learning model for macroscopic skin lesion segmentation and to capitalize on the sizable dermoscopic databases. In the first experiment, two groups of deep learning models, U-Net-based and DeepLab-based, were created and tested exclusively on the available macroscopic images. In the second experiment, the possibility of transferring knowledge between the domains was tested. To accomplish this, the selected model was retrained on the dermoscopic images and subsequently fine-tuned with the macroscopic images. The best model in the first experiment was a DeepLab-based model with a MobileNetV2 feature extractor (width multiplier of 0.35), optimized with the soft Dice loss. This model comprised 0.4 million parameters and obtained a thresholded Jaccard coefficient of 72.97% and 78.51% on the Dermofit and SMARTSKINS databases, respectively. In the second experiment, with the use of transfer learning, the performance of this model improved significantly on the first database, to 75.46%, and decreased slightly on the second, to 78.04%. © 2020, The Author(s).
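The soft Dice loss mentioned in the abstract is a standard differentiable relaxation of the Dice coefficient for segmentation; a minimal NumPy version (the `eps` smoothing constant is an implementation assumption, not taken from the paper) looks like this:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred:   predicted foreground probabilities in [0, 1]
    target: ground-truth mask of 0s and 1s, same shape as pred
    """
    pred = pred.ravel()
    target = target.ravel()
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice

# A perfect prediction gives a loss of ~0 ...
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(soft_dice_loss(mask, mask))        # ~0.0
# ... while a completely wrong one approaches 1.
print(soft_dice_loss(1.0 - mask, mask))  # ~1.0
```

Because the loss is computed on probabilities rather than thresholded masks, it stays differentiable and can be minimized directly during training.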

2020

Efficient CIEDE2000-Based Color Similarity Decision for Computer Vision

Authors
Pereira, A; Carvalho, P; Coelho, G; Corte Real, L;

Publication
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract
Color and color differences are critical aspects in many image processing and computer vision applications. A paradigmatic example is object segmentation, where color distances can greatly influence the performance of the algorithms. Metrics for color difference have been proposed in the literature, including standards such as CIEDE2000, which quantifies the change in visual perception between two given colors. This standard has been recommended for industrial computer vision applications, but the benefits of its application have been impaired by the complexity of the formula. This paper proposes a new strategy that improves the usability of the CIEDE2000 metric when a maximum acceptable distance can be imposed. We argue that, for applications where a maximum value can be established, above which colors are considered to be different, it is possible to reduce the number of metric computations by preemptively analyzing the color features. This methodology retains the benefits of the metric while overcoming its computational limitations, thus broadening the range of computer vision applications in which CIEDE2000 can be used, including those with tight computational resource requirements.
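The early-decision idea can be illustrated with a cheap lower bound. In CIEDE2000, the squared distance has the form a² + b² + R_T·a·b for the chroma/hue part, which is non-negative when |R_T| ≤ 2, so the lightness term alone bounds the metric from below. The sketch below is an illustration of this kind of shortcut, not the authors' actual decision rule; `colors_differ` and `full_ciede2000` are hypothetical names:

```python
import math

def lightness_lower_bound(L1, L2, kL=1.0):
    """Cheap lower bound on CIEDE2000: the lightness term |dL'| / (kL * SL).

    The remaining chroma/hue part of dE00^2 is non-negative
    (a^2 + b^2 + RT*a*b >= 0 for |RT| <= 2), so dE00 >= this value.
    """
    dL = L2 - L1
    Lbar = (L1 + L2) / 2.0
    SL = 1.0 + 0.015 * (Lbar - 50.0) ** 2 / math.sqrt(20.0 + (Lbar - 50.0) ** 2)
    return abs(dL) / (kL * SL)

def colors_differ(lab1, lab2, threshold, full_ciede2000=None):
    """Decide 'different' early when the lower bound already exceeds the
    threshold; fall back to the full (expensive) formula otherwise."""
    if lightness_lower_bound(lab1[0], lab2[0]) > threshold:
        return True                       # no need to evaluate the full metric
    if full_ciede2000 is None:
        return None                       # undecided without the full formula
    return full_ciede2000(lab1, lab2) > threshold

# Two colors far apart in lightness are rejected without the full formula:
print(colors_differ((90.0, 0.0, 0.0), (10.0, 0.0, 0.0), threshold=5.0))  # True
```

In a segmentation inner loop, each early exit saves the trigonometric and hue-rotation terms that make the full CIEDE2000 formula expensive.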

2020

Texture collinearity foreground segmentation for night videos

Authors
Martins, I; Carvalho, P; Corte Real, L; Luis Alba Castro, JL;

Publication
COMPUTER VISION AND IMAGE UNDERSTANDING

Abstract
One of the most difficult scenarios for unsupervised segmentation of moving objects is found in nighttime videos, where the main challenges are the poor illumination conditions resulting in low visibility of objects, very strong lights, surface-reflected light, a great variance of light intensity, sudden illumination changes, hard shadows, camouflaged objects, and noise. This paper proposes a novel method, coined COLBMOG (COLlinearity Boosted MOG), devised specifically for foreground segmentation in nighttime videos, that shows the ability to overcome some of the limitations of state-of-the-art methods and still perform well in daytime scenarios. It is a texture-based classification method, using local texture modeling, complemented by a color-based classification method. The local texture at the pixel neighborhood is modeled as an n-dimensional vector. For a given pixel, the classification is based on the collinearity between this feature in the input frame and in the reference background frame. For this purpose, a multimodal temporal model of the collinearity between texture vectors of background pixels is maintained. COLBMOG was objectively evaluated using the ChangeDetection.net (CDnet) 2014 benchmark, Night Videos category. COLBMOG ranks first among all the unsupervised methods. A detailed analysis of the results revealed the superior performance of the proposed method compared to the best-performing state-of-the-art methods in this category, particularly evident in the most complex situations, where all the algorithms tend to fail.
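The collinearity criterion the abstract relies on can be illustrated with cosine similarity between texture vectors: a pure illumination gain scales the whole neighbourhood vector, leaving it collinear with the background reference, while a moving object changes its direction. This is a minimal sketch of the idea, not COLBMOG's actual multimodal temporal model:

```python
import numpy as np

def collinearity(u, v, eps=1e-12):
    """Collinearity between two texture vectors: |cos(angle)| in [0, 1].

    1.0 means the vectors are collinear (same texture up to a gain factor,
    e.g. a global illumination change); lower values suggest a texture change.
    """
    u = np.asarray(u, dtype=float).ravel()
    v = np.asarray(v, dtype=float).ravel()
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

# Texture vector of a 3x3 neighbourhood, and the same patch under stronger light:
patch = np.array([10, 12, 11, 9, 13, 10, 11, 12, 10], dtype=float)
brighter = 1.8 * patch          # pure gain change -> still background
moving_object = patch[::-1] + np.array([0, 40, 0, 0, 0, 30, 0, 0, 0], dtype=float)

print(collinearity(patch, brighter))       # ~1.0 -> consistent with background
print(collinearity(patch, moving_object))  # noticeably lower -> candidate foreground
```

This invariance to multiplicative intensity changes is what makes a collinearity test attractive for night scenes with strong, fluctuating lights.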

2020

Identifying relationships between imaging phenotypes and lung cancer-related mutation status: EGFR and KRAS

Authors
Pinheiro, G; Pereira, T; Dias, C; Freitas, C; Hespanhol, V; Costa, JL; Cunha, A; Oliveira, HP;

Publication
SCIENTIFIC REPORTS

Abstract
EGFR and KRAS are the most frequently mutated genes in lung cancer and are active research topics in targeted therapy. Biopsy is the traditional method to genetically characterise a tumour. However, it is a risky procedure, painful for the patient, and, occasionally, the tumour may be inaccessible. This work aims to study and debate the nature of the relationships between imaging phenotypes and lung cancer-related mutation status. Until now, the literature has failed to point to new research directions, mainly consisting of results-oriented works in a field where there is still not enough available data to train clinically viable models. We intend to open a discussion about critical points and to present new possibilities for future radiogenomics studies. We conducted high-dimensional data visualisation and developed classifiers, which allowed us to analyse the results for the EGFR and KRAS biological markers according to different combinations of input features. We show that EGFR mutation status might be correlated with CT scan imaging phenotypes; however, the same does not seem to hold for KRAS mutation status. The experiments also suggest that the best way to approach this problem is by combining nodule-related features with features from other lung structures.

2020

Estimation of Sulfonamides Concentration in Water Based on Digital Colourimetry

Authors
Carvalho, PH; Bessa, S; Silva, ARM; Peixoto, PS; Segundo, MA; Oliveira, HP;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, PT I

Abstract
Overuse of antibiotics is causing the environment to become polluted with them. This is a major threat to global health, as bacteria develop resistance to antibiotics because of it. To monitor this threat, multiple antibiotic detection methods have been developed; however, they are normally complex and costly. In this work, an affordable, easy-to-use alternative based on digital colourimetry is proposed. Photographs of samples next to a colour reference target were acquired to build a dataset. The proposed algorithm detects the reference target, using binarisation algorithms, in order to standardise the collected images with a colour correction matrix converting from RGB to XYZ, providing the necessary colour constancy between photographs from different devices. Afterwards, the sample is extracted through edge detection and Hough transform algorithms. Finally, the sulfonamide concentration is estimated using an experimentally designed calibration curve, which correlates the concentration and colour information. The best performance was obtained using the Hue channel, achieving a relative standard deviation of less than 3.5%. © 2019, Springer Nature Switzerland AG.
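The final estimation step (colour correction for device constancy, hue extraction, then a calibration curve from hue to concentration) can be sketched as below. The correction matrix and the linear calibration coefficients are illustrative placeholders; in the paper the matrix is derived from the reference colour target and the curve is fitted experimentally to known sulfonamide standards:

```python
import colorsys
import numpy as np

# Hypothetical linear calibration curve: concentration = A * hue + B.
# (Illustrative coefficients only; the paper fits this curve experimentally.)
A, B = -2.0, 1.4

# Stand-in colour-correction matrix; the paper estimates one per photograph
# from the reference target so that all devices map to a common colour space.
M = np.array([[1.05, -0.02, 0.01],
              [0.03,  0.98, 0.02],
              [-0.01, 0.04, 1.01]])

def estimate_concentration(rgb_0_255):
    """Correct the device colour, extract hue, and apply the calibration curve."""
    rgb = np.asarray(rgb_0_255, dtype=float) / 255.0
    corrected = np.clip(M @ rgb, 0.0, 1.0)        # colour-constancy step
    hue, _, _ = colorsys.rgb_to_hsv(*corrected)   # hue in [0, 1)
    return A * hue + B                            # calibration curve

c = estimate_concentration([200, 120, 60])        # an orange-ish sample colour
print(f"estimated concentration: {c:.3f}")
```

The key design point carried over from the abstract is that hue, once colour constancy is enforced, is a single scalar that can be mapped to concentration through a simple monotonic curve.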
