About

Luis F. Teixeira holds a Ph.D. in Electrical and Computer Engineering from Universidade do Porto in the area of computer vision (2009). Currently he is an Assistant Professor at the Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto, and a researcher at INESC TEC. Previously he was a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning and interactive systems.

Publications

2018

Human-robot interaction based on gestures for service robots

Authors
de Sousa, P; Esteves, T; Campos, D; Duarte, F; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Neves, AJR; Teixeira, L;

Publication
Lecture Notes in Computational Vision and Biomechanics

Abstract
Gesture recognition is very important for Human-Robot Interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot, an autonomous shopping cart mostly used by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, which extracts a bounding box focused on the user. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region growing algorithm is applied to delete all other objects in the image. A threshold technique is applied again to the original image to obtain all the objects in front of the user. Intersecting the threshold-based segmentation result with the region-growing result, we obtain candidate objects for the user's arms. After a labelling algorithm isolates each object individually, Principal Component Analysis is computed for each one to obtain its center and orientation. Using that information, we intersect the silhouette of the arm with a line and take the upper point of the intersection, which indicates the hand position. A Kalman filter is then applied to track the hand, and gesture recognition is performed with state machines that describe the gestures (Start, Stop, Pause). We tested the proposed approach in a real-case scenario with different users and obtained an accuracy of around 89.7%. © 2018, Springer International Publishing AG.
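
For illustration, the orientation and tracking steps above can be sketched in a few lines of Python with OpenCV. This is a minimal sketch, not the authors' implementation: the binary arm mask is assumed to come from the segmentation stages described in the abstract, and the Kalman noise covariances are illustrative values.

    import numpy as np
    import cv2

    def arm_center_and_orientation(mask):
        # PCA over the foreground pixel coordinates of a binary arm mask
        # yields the blob's center and the orientation of its major axis.
        ys, xs = np.nonzero(mask)
        pts = np.stack([xs, ys], axis=1).astype(np.float64)
        center = pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
        major = eigvecs[:, np.argmax(eigvals)]          # arm direction
        return center, np.arctan2(major[1], major[0])   # center, angle (rad)

    # Constant-velocity Kalman filter (state: x, y, vx, vy) to track the hand.
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed

    def track_hand(measured_xy):
        # Predict the hand position, then correct with the new detection.
        predicted = kf.predict()[:2].ravel()
        kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
        return predicted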

2018

Autoencoders as Weight Initialization of Deep Classification Networks Applied to Papillary Thyroid Carcinoma

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)

Abstract
Cancer is one of the most serious health problems of our time. One approach for automatically classifying tumor samples is to analyze derived molecular information. Previous work by Teixeira et al. compared different methods of Data Oversampling and Feature Reduction, as well as Deep (Stacked) Denoising Autoencoders followed by a shallow layer for classification. In this work, we compare the performance of 6 different types of Autoencoder (AE), combined with two different approaches when training the classification model: (a) fixing the weights after pretraining an AE, and (b) allowing fine-tuning of the entire network. We also apply two different strategies for embedding the AE into the classification network: (1) by only importing the encoding layers, and (2) by importing the complete AE. Our best result was the combination of unsupervised feature learning through a single-layer Denoising AE, followed by its complete import into the classification network, and subsequent fine-tuning through supervised training, achieving an F1 score of 99.61% ± 0.54. We conclude that a reconstruction of the input space, combined with a deeper classification network, outperforms previous work without resorting to data augmentation techniques.
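
As an illustration of strategy (2) combined with (b), the Keras sketch below pretrains a single-layer denoising AE, imports the complete AE (encoder and decoder) into the classification network, and fine-tunes the whole network with supervised training. The input dimensionality, layer sizes, noise level and two-class head are assumptions for the sketch, not the paper's configuration.

    from tensorflow import keras
    from tensorflow.keras import layers

    n_features = 2048  # assumed input dimensionality

    # Single-layer denoising autoencoder: reconstruct clean inputs from
    # noise-corrupted ones (the noise layer is only active during training).
    inputs = keras.Input(shape=(n_features,))
    noisy = layers.GaussianNoise(0.1)(inputs)
    encoded = layers.Dense(512, activation="relu")(noisy)
    decoded = layers.Dense(n_features, activation="linear")(encoded)
    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(x_train, x_train, epochs=50)  # unsupervised pretraining

    # Strategy (2): import the complete AE, so the classification head sits
    # on top of the decoder output. Strategy (b): leave every layer trainable
    # so supervised training fine-tunes the entire network.
    outputs = layers.Dense(2, activation="softmax")(decoded)  # assumed 2 classes
    classifier = keras.Model(inputs, outputs)
    classifier.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
    # classifier.fit(x_train, y_train, epochs=20)   # supervised fine-tuning

Variant (a) would instead set trainable = False on the imported layers before compiling, keeping the pretrained weights fixed.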

2017

Pre-trained convolutional networks and generative statistical models: A comparative study in large datasets

Authors
Michael, J; Teixeira, LF;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
This study explored the viability of out-of-the-box, pre-trained ConvNet models as a tool to generate features for large-scale classification tasks. A juxtaposition with generative methods for vocabulary generation was drawn. Both methods were chosen in an attempt to integrate other datasets (transfer learning) and unlabelled data, respectively. Both methods were used together, studying the viability of a ConvNet model to estimate category labels of unlabelled images. All experiments pertaining to this study were carried out over a two-class set, later expanded into a 5-category dataset. The pre-trained models used were obtained from the Caffe Model Zoo. The study showed that the pre-trained model achieved the best results for the binary dataset, with an accuracy of 0.945. However, for the 5-class dataset, generative vocabularies outperformed the ConvNet (0.91 vs. 0.861). Furthermore, when replacing labelled images with unlabelled ones during training, acceptable accuracy scores were obtained (as high as 0.903). Additionally, it was observed that linear kernels perform particularly well when used with generative models. This was especially relevant when compared to ConvNets, which require days of training even when utilizing multiple GPUs for computations. © Springer International Publishing AG 2017.
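
The fixed-feature pipeline can be sketched as follows. Since the original Caffe Model Zoo models are not reproduced here, a torchvision backbone stands in purely for illustration; the 512-dimensional output and the preprocessing constants belong to the stand-in model, not to the study.

    import torch
    from torchvision import models, transforms
    from sklearn.svm import LinearSVC

    # Pre-trained ConvNet used out of the box as a fixed feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(pil_images):
        # list of PIL images -> (N, 512) feature matrix
        batch = torch.stack([preprocess(im) for im in pil_images])
        return backbone(batch).numpy()

    # A linear classifier is then trained on the extracted features, e.g.:
    # svm = LinearSVC().fit(extract_features(train_images), train_labels)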

2016

Visual-Inertial Based Autonomous Navigation

Authors
Martins, FD; Teixeira, LF; Nobrega, R;

Publication
Robot 2015: Second Iberian Robotics Conference: Advances in Robotics, Vol. 2

Abstract
This paper presents an autonomous navigation and position estimation framework which enables an Unmanned Aerial Vehicle (UAV) to safely navigate in indoor environments. The system uses the on-board Inertial Measurement Unit (IMU) and the front camera of an AR.Drone platform, together with a laptop computer where all the data is processed. The system is composed of the following modules: navigation, door detection and position estimation. For navigation, the system relies on the detection of the vanishing point using the Hough transform for wall detection and avoidance. Door detection relies not only on the detection of the door contours but also on the recesses of each door, using the latter as the main detector and the former as additional validation for higher precision. For position estimation, the system relies on pre-coded information about the floor on which the drone is navigating, and on the velocity of the drone provided by its IMU. Several flight experiments show that the drone is able to safely navigate corridors while detecting visible doors and estimating its position. The developed navigation and door detection methods are reliable and enable a UAV to fly without the need for human intervention.
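
The vanishing-point step of the navigation module can be illustrated with a short OpenCV sketch. The Canny and Hough thresholds and the median-of-intersections aggregation are assumptions made for the sketch, not the paper's exact parameters.

    import numpy as np
    import cv2

    def estimate_vanishing_point(frame):
        # Detect edges, fit lines in (rho, theta) form with the Hough
        # transform, and take the median of the pairwise line intersections
        # as a robust vanishing-point estimate.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
        if lines is None:
            return None
        lines = lines[:20]  # keep only the strongest lines for speed
        pts = []
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                (r1, t1), (r2, t2) = lines[i][0], lines[j][0]
                A = np.array([[np.cos(t1), np.sin(t1)],
                              [np.cos(t2), np.sin(t2)]])
                if abs(np.linalg.det(A)) < 1e-6:
                    continue  # near-parallel lines: no stable intersection
                pts.append(np.linalg.solve(A, np.array([r1, r2])))
        if not pts:
            return None
        return np.median(np.array(pts), axis=0)  # (x, y) in image coordinates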

2016

User interface design guidelines for smartphone applications for people with Parkinson's disease

Authors
Nunes, F; Silva, PA; Cevada, J; Barros, AC; Teixeira, L;

Publication
Universal Access in the Information Society

Abstract
Parkinson's disease (PD) is often responsible for difficulties in interacting with smartphones; however, research has not yet addressed these issues and how they challenge people with Parkinson's (PwP). This paper specifically investigates the symptoms and characteristics of PD that may influence the interaction with smartphones, in order to contribute in this direction. The research was based on a literature review of PD symptoms, eight semi-structured interviews with healthcare professionals, observations of PwP, and usability experiments with 39 PwP. Contributions include a list of PD symptoms that may influence the interaction with smartphones, a set of experimental results that evaluated the performance of four gestures (tap, swipe, multiple-tap, and drag), and 12 user interface design guidelines for creating smartphone user interfaces for PwP. Findings contribute to the work of researchers and practitioners alike engaged in designing user interfaces for PwP or in the broader area of inclusive design.

Supervised Theses

2017

System for the analysis and validation of company data across multiple registration bodies worldwide

Author
Rui Filipe Fernandes Santos

Institution
UP-FEUP

2017

Statistical Comparison of Different Machine-Learning Approaches for Malaria Parasites Detection in Microscopic Images

Author
Mafalda Falcão Torres Veiga de Ferreira

Institution
UP-FEUP

2017

Screening tool to assess the risk of falling

Author
Alcino João Silva de Sousa

Institution
UP-FEUP

2017

Transmedia Storytelling in B2B: the StorySD case study

Author
Ana Filipa Sousa Alves

Institution
UP-FEUP

2017

Visual programming for defining a cloud-based television studio using IP technology

Author
António Paulo Rodrigues Presa

Institution
UP-FEUP