Publications

Publications by CTM

2018

Transfer learning approach for fall detection with the FARSEEING real-world dataset and simulated falls

Authors
Silva, J; Sousa, I; Cardoso, JS;

Publication
40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2018, Honolulu, HI, USA, July 18-21, 2018

Abstract
Falls are very rare events and extremely difficult to acquire in free-living conditions. For this reason, most prior work on fall detection has focused on simulated datasets acquired in scenarios that mimic the real-world context; however, the validation of systems trained with simulated falls remains unclear. This work presents a transfer learning approach for combining a dataset of simulated falls and non-falls, obtained from young volunteers, with the real-world FARSEEING dataset, in order to train a set of supervised classifiers for discriminating between fall and non-fall events. The objective is to analyse whether a combination of simulated and real falls can enrich the model. In the real world, falls are sporadic events, which results in imbalanced datasets. In this work, several methods for imbalanced learning were employed: SMOTE, Balance Cascade and ranking models. Balance Cascade obtained the fewest misclassifications in the validation set. There was an improvement when mixing real falls and simulated non-falls, compared with the case when only simulated falls were used for training. When testing with a mixed set of real falls and simulated non-falls, it is even more important to train with a mixed set. Moreover, it was possible to conclude that a model trained with simulated falls generalizes better when tested with real falls than the opposite. The overall accuracy obtained for the combination of different datasets was above 95%. © 2018 IEEE.
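The SMOTE step mentioned in the abstract synthesizes extra minority-class samples (falls) by interpolating between a real sample and one of its nearest minority neighbours. A minimal sketch of that idea, with hypothetical data and parameters not taken from the paper:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE: create n_new synthetic minority samples by
    interpolating each picked sample toward one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a sample is not its own neighbour
    neigh = np.argsort(d, axis=1)[:, :k]   # k nearest neighbours per sample
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)                          # random minority sample
        j = neigh[i, rng.integers(min(k, n - 1))]    # random neighbour of it
        gap = rng.random()                           # interpolation factor
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synth)

falls = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # rare-class samples
new = smote(falls, n_new=4, k=2, rng=0)
print(new.shape)  # (4, 2)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled set stays inside the minority class's convex hull.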

2018

Ordinal Image Segmentation using Deep Neural Networks

Authors
Fernandes, K; Cardoso, JS;

Publication
2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)

Abstract
Ordinal arrangement of objects is a common property in biomedical images. Traditional methods to deal with semantic image segmentation in this setting are ad-hoc and application specific. In this paper, we propose ordinal-aware deep learning architectures for image segmentation that enforce pixelwise consistency by construction. We validated the proposed architectures on several real-life biomedical datasets and achieved competitive results in all cases. © 2018 IEEE.
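One common way to enforce the pixelwise ordinal consistency the abstract refers to is to predict per-threshold probabilities P(y ≥ k) and make them non-increasing in k by construction, e.g. as a cumulative product of sigmoid outputs. A sketch of that construction (an illustration of the general principle, not the paper's exact architecture):

```python
import numpy as np

def ordinal_probs(logits):
    """Turn per-threshold logits of shape (K-1, H, W) into consistent
    P(y >= k) maps: each map is the running product of sigmoid outputs,
    so P(y>=1) >= P(y>=2) >= ... at every pixel, by construction."""
    sig = 1.0 / (1.0 + np.exp(-logits))
    return np.cumprod(sig, axis=0)

logits = np.random.default_rng(0).standard_normal((3, 4, 4))
p = ordinal_probs(logits)
# monotone at every pixel: each successive map can only shrink
assert np.all(np.diff(p, axis=0) <= 1e-12)
```

The pixel's ordinal label can then be read off as the number of thresholds with P(y ≥ k) above 0.5, and no post-hoc correction for inconsistent maps is needed.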

2018

Robust Clustering-based Segmentation Methods for Fingerprint Recognition

Authors
Ferreira, PM; Sequeira, AF; Cardoso, JS; Rebelo, A;

Publication
2018 INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG)

Abstract
Fingerprint recognition has been widely studied for more than 45 years and yet it remains an intriguing pattern recognition problem. This paper focuses on foreground mask estimation, which is crucial for the accuracy of a fingerprint recognition system. The method consists of a robust cluster-based fingerprint segmentation framework incorporating an additional step to deal with pixels that were rejected as foreground in a decision considered not reliable enough. These rejected pixels are then further analysed for a more accurate classification. The procedure falls in the paradigm of classification with reject option, a viable option in several real-world applications of machine learning and pattern recognition where the cost of misclassifying observations is high. The present work expands a previous method based on fuzzy C-means clustering with two variations regarding: i) the filters used; and ii) the clustering method for pixel classification as foreground/background. Experimental results demonstrate improved results on the FVC datasets compared with state-of-the-art methods, including methodologies based on deep learning architectures. © 2018 Gesellschaft fuer Informatik.
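The "classification with reject option" idea can be sketched on top of fuzzy C-means memberships: a pixel is assigned to its strongest cluster only when the membership is confident enough, and otherwise rejected for further analysis. A minimal illustration with made-up 1-D data and a hypothetical threshold (not the paper's filters or parameters):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-means membership u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    of each point to each cluster centre."""
    d = np.linalg.norm(X[:, None] - centers[None, :], axis=-1) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def segment_with_reject(X, centers, tau=0.7):
    """Label each pixel by its strongest cluster; memberships below tau
    are rejected (-1) and left for a second, more careful pass."""
    u = fcm_memberships(X, centers)
    labels = u.argmax(axis=1)
    labels[u.max(axis=1) < tau] = -1
    return labels

pts = np.array([[0.0], [0.1], [1.0], [0.5]])   # pixel feature values
ctrs = np.array([[0.0], [1.0]])                # background / foreground
labels = segment_with_reject(pts, ctrs)        # the midpoint is rejected
```

The point equidistant from both centres gets a 0.5/0.5 membership split and is rejected rather than guessed, which is exactly the behaviour the reject-option paradigm trades accuracy for.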

2018

Dimensional emotion recognition using visual and textual cues

Authors
Ferreira, PM; Pernes, D; Fernandes, K; Rebelo, A; Cardoso, JS;

Publication
CoRR

Abstract

2018

Human-Robot Interaction Based on Gestures for Service Robots

Authors
de Sousa, P; Esteves, T; Campos, D; Duarte, F; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Neves, AJR; Teixeira, L;

Publication
VIPIMAGE 2017

Abstract
Gesture recognition is very important for Human-Robot Interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot, an autonomous shopping cart mostly used by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, where a bounding box focusing on the user is extracted. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region growing algorithm is applied to delete all other objects in the image. We apply a threshold technique again to the original image, to obtain all the objects in front of the user. Intersecting the threshold-based segmentation result with the region growing result, we obtain candidate objects to be arms of the user. After applying a labelling algorithm to obtain each object individually, a Principal Component Analysis is computed for each one to obtain its centre and orientation. Using that information, we intersect the silhouette of the arm with a line, obtaining the upper point of the intersection, which indicates the hand position. A Kalman filter is then applied to track the hand and, based on state machines describing the gestures (Start, Stop, Pause), we perform gesture recognition. We tested the proposed approach in a real-case scenario with different users and obtained an accuracy of around 89.7%.
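The hand-tracking step the abstract describes can be sketched as a 1-D constant-velocity Kalman filter smoothing a noisy hand coordinate over frames. The noise parameters and data below are hypothetical, not from the paper:

```python
import numpy as np

def kalman_track(zs, q=1e-3, r=0.05):
    """Smooth a noisy 1-D hand-coordinate track with a
    constant-velocity Kalman filter (state = [position, velocity])."""
    F = np.array([[1., 1.], [0., 1.]])   # state transition (unit time step)
    H = np.array([[1., 0.]])             # we observe position only
    Q = q * np.eye(2)                    # process noise covariance
    R = np.array([[r]])                  # measurement noise covariance
    x = np.array([zs[0], 0.])            # initial state
    P = np.eye(2)                        # initial state covariance
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.linspace(0.0, 1.0, 50)
noisy = t + 0.05 * np.random.default_rng(0).standard_normal(50)
smooth = kalman_track(noisy)
```

The smoothed positions can then feed the Start/Stop/Pause state machines, which only need a stable, jitter-free hand trajectory to detect transitions.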

2018

Autoencoders as Weight Initialization of Deep Classification Networks Applied to Papillary Thyroid Carcinoma

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM)

Abstract
Cancer is one of the most serious health problems of our time. One approach for automatically classifying tumor samples is to analyze derived molecular information. Previous work by Teixeira et al. compared different methods of Data Oversampling and Feature Reduction, as well as Deep (Stacked) Denoising Autoencoders followed by a shallow layer for classification. In this work, we compare the performance of 6 different types of Autoencoder (AE), combined with two different approaches when training the classification model: (a) fixing the weights after pretraining an AE, and (b) allowing fine-tuning of the entire network. We also apply two different strategies for embedding the AE into the classification network: (1) importing only the encoding layers, and (2) importing the complete AE. Our best result was the combination of unsupervised feature learning through a single-layer Denoising AE, followed by its complete import into the classification network and subsequent fine-tuning through supervised training, achieving an F1 score of 99.61% +/- 0.54. We conclude that a reconstruction of the input space, combined with a deeper classification network, outperforms previous work without resorting to data augmentation techniques.
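The weight-transfer strategy the abstract calls option (2), importing the complete AE and adding a classification head, can be sketched with plain numpy forward passes. The layer sizes and weights below are hypothetical placeholders for a pretrained AE, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained single-layer denoising AE: encoder + decoder.
W_e, b_e = rng.standard_normal((32, 8)), np.zeros(8)    # 32 -> 8 code
W_d, b_d = rng.standard_normal((8, 32)), np.zeros(32)   # 8 -> 32 reconstruction

# Strategy (2): keep the *complete* AE inside the classifier and
# append a freshly initialised softmax head, then fine-tune everything.
W_head, b_head = 0.01 * rng.standard_normal((32, 2)), np.zeros(2)

def relu(a):
    return np.maximum(a, 0.0)

def classify(x):
    h = relu(x @ W_e + b_e)           # imported encoder layer
    r = relu(h @ W_d + b_d)           # imported decoder layer, kept in the net
    logits = r @ W_head + b_head      # new classification head
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

probs = classify(rng.standard_normal((4, 32)))   # 4 samples -> 2-class probs
```

Under strategy (a) the AE weights `W_e`, `b_e`, `W_d`, `b_d` would be frozen and only the head trained; under strategy (b) gradients flow through all three layers.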
