Details

  • Name

    Mafalda Falcão Ferreira
  • Cluster

    Computer Science
  • Position

    Research Assistant
  • Since

    01 May 2018
Publications

2020

Teaching cross-cultural design thinking for healthcare

Authors
Ferreira, MF; Savoy, JN; Markey, MK;

Publication
BREAST

2020

Extracting architectural patterns of deep neural networks for disease detection

Authors
Ferreira, MF;

Publication
PROCEEDINGS OF THE 35TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING (SAC'20)

2020

Using autoencoders as a weight initialization method on deep neural networks for disease detection

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
BMC MEDICAL INFORMATICS AND DECISION MAKING

2018

Autoencoders as Weight Initialization of Deep Classification Networks Applied to Papillary Thyroid Carcinoma

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM)

Abstract
Cancer is one of the most serious health problems of our time. One approach for automatically classifying tumor samples is to analyze derived molecular information. Previous work by Teixeira et al. compared different methods of Data Oversampling and Feature Reduction, as well as Deep (Stacked) Denoising Autoencoders followed by a shallow layer for classification. In this work, we compare the performance of six different types of Autoencoder (AE), combined with two different approaches when training the classification model: (a) fixing the weights after pretraining an AE, and (b) allowing fine-tuning of the entire network. We also apply two different strategies for embedding the AE into the classification network: (1) importing only the encoding layers, and (2) importing the complete AE. Our best result was the combination of unsupervised feature learning through a single-layer Denoising AE, followed by its complete import into the classification network and subsequent fine-tuning through supervised training, achieving an F1 score of 99.61% ± 0.54. We conclude that a reconstruction of the input space, combined with a deeper classification network, outperforms previous work without resorting to data augmentation techniques.
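
For readers unfamiliar with the strategy described in the abstract, a minimal sketch in Keras is shown below: pretrain a single-layer denoising autoencoder, import the complete AE into a classification network, and fine-tune everything with supervised training. This is not the authors' code; the input dimensionality, layer sizes, noise level, training settings, and placeholder data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the paper's code): denoising-AE pretraining
# followed by full import into a classifier and supervised fine-tuning.
import numpy as np
from tensorflow.keras import layers, models

n_features = 2000   # assumed input dimensionality (e.g. molecular features)
n_classes = 2       # assumed binary classification task

# --- 1. Unsupervised pretraining: single-layer denoising autoencoder ---
inputs = layers.Input(shape=(n_features,))
corrupted = layers.GaussianNoise(0.1)(inputs)           # denoising corruption
encoded = layers.Dense(256, activation="relu")(corrupted)
decoded = layers.Dense(n_features, activation="linear")(encoded)
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

x_unlabeled = np.random.rand(512, n_features).astype("float32")  # placeholder data
autoencoder.fit(x_unlabeled, x_unlabeled, epochs=5, batch_size=32, verbose=0)

# --- 2. Import the complete AE (encoder + decoder) into the classifier ---
# Strategy (2) from the abstract: the classification head is stacked on the
# reconstructed input space. The AE layers keep their pretrained weights
# because the same graph is reused.
clf_head = layers.Dense(n_classes, activation="softmax")(decoded)
classifier = models.Model(inputs, clf_head)

# --- 3. Fine-tune the entire network with supervised training ---
# (alternative (a) in the abstract would instead freeze the imported layers:
#  for layer in classifier.layers[:-1]: layer.trainable = False)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
x_labeled = np.random.rand(256, n_features).astype("float32")   # placeholder data
y_labeled = np.random.randint(0, n_classes, size=256)
classifier.fit(x_labeled, y_labeled, epochs=5, batch_size=32, verbose=0)
```

Importing only the encoding layers, strategy (1) in the abstract, would correspond to attaching the classification head to `encoded` instead of `decoded` before building the classifier model.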