About

Luis F. Teixeira holds a Ph.D. in Electrical and Computer Engineering from Universidade do Porto in the area of computer vision (2009). He is currently an Assistant Professor at the Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto, and a researcher at INESC TEC. Previously he was a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning and interactive systems.

Publications

2018

Human-robot interaction based on gestures for service robots

Authors
de Sousa, P; Esteves, T; Campos, D; Duarte, F; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Neves, AJR; Teixeira, L;

Publication
Lecture Notes in Computational Vision and Biomechanics

Abstract
Gesture recognition is very important for human-robot interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot, an autonomous shopping cart mostly used by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, which extracts a bounding box focused on the user. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region-growing algorithm is applied to delete all other objects in the image. We again apply a threshold technique to the original image, to obtain all the objects in front of the user. Intersecting the threshold-based segmentation result with the region-growing result, we obtain candidate objects for the arms of the user. After applying a labelling algorithm to obtain each object individually, Principal Component Analysis is computed for each one to obtain its center and orientation. Using that information, we intersect the silhouette of the arm with a line, obtaining the upper point of the intersection, which indicates the hand position. A Kalman filter is then applied to track the hand and, based on state machines describing gestures (Start, Stop, Pause), we perform gesture recognition. We tested the proposed approach in a real-case scenario with different users and obtained an accuracy of around 89.7%. © 2018, Springer International Publishing AG.
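The PCA step described in the abstract (computing the center and orientation of each segmented arm candidate) can be sketched as follows. This is a minimal NumPy illustration over a synthetic binary mask; the function name and test blob are ours, not code from the paper.

```python
import numpy as np

def arm_centroid_and_orientation(mask):
    """PCA over the foreground pixel coordinates of a binary mask:
    returns the centroid (x, y) and the principal-axis angle in radians."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov(pts - centroid, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # principal (longest) axis
    angle = np.arctan2(major[1], major[0])
    return centroid, angle

# Example: a thin diagonal blob should yield a ~45 degree principal axis
mask = np.zeros((50, 50), dtype=bool)
for i in range(40):
    mask[5 + i, 5 + i] = True
centroid, angle = arm_centroid_and_orientation(mask)
```

The orientation, together with the centroid, defines the line that the abstract intersects with the arm silhouette to locate the hand.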

2017

Pre-trained convolutional networks and generative statistical models: A comparative study in large datasets

Authors
Michael, J; Teixeira, LF;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
This study explored the viability of out-of-the-box, pre-trained ConvNet models as a tool to generate features for large-scale classification tasks. A juxtaposition with generative methods for vocabulary generation was drawn. Both methods were chosen in an attempt to integrate other datasets (transfer learning) and unlabelled data, respectively. Both methods were used together, studying the viability of a ConvNet model to estimate category labels of unlabelled images. All experiments pertaining to this study were carried out over a two-class set, later expanded into a 5-category dataset. The pre-trained models used were obtained from the Caffe Model Zoo. The study showed that the pre-trained model achieved the best results for the binary dataset, with an accuracy of 0.945. However, for the 5-class dataset, generative vocabularies outperformed the ConvNet (0.91 vs. 0.861). Furthermore, when replacing labelled images with unlabelled ones during training, acceptable accuracy scores were obtained (as high as 0.903). Additionally, it was observed that linear kernels perform particularly well when utilized with generative models. This was especially relevant when compared to ConvNets, which require days of training even when utilizing multiple GPUs for computations. © Springer International Publishing AG 2017.
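The transfer-learning pattern the study compares, a fixed pre-trained feature extractor feeding a simple linear classifier, can be sketched as below. To keep the sketch self-contained, a fixed random projection stands in for the pre-trained ConvNet (the study used Caffe Model Zoo models), and the nearest-centroid classifier and synthetic two-class data are placeholders of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained ConvNet with its classification layer removed:
# flatten each image and apply a fixed linear projection to a 64-d "embedding".
PROJ = rng.normal(size=(256, 64))

def extract_features(images):
    """Map (N, 16, 16) images to fixed (N, 64) feature vectors."""
    return images.reshape(len(images), -1) @ PROJ

def fit_centroids(feats, labels):
    """Train a minimal linear classifier: one mean vector per class."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feats, centroids):
    """Assign each sample to the nearest class centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(feats - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Two synthetic "image" classes, separable by mean intensity
imgs = np.concatenate([rng.normal(0.0, 1.0, (50, 16, 16)),
                       rng.normal(3.0, 1.0, (50, 16, 16))])
labels = np.array([0] * 50 + [1] * 50)
feats = extract_features(imgs)
acc = (predict(feats, fit_centroids(feats, labels)) == labels).mean()
```

The key property illustrated is that the extractor is frozen: only the cheap linear stage is trained, which is why such pipelines avoid the days of GPU training mentioned in the abstract.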

2016

Visual-Inertial Based Autonomous Navigation

Authors
Martins, FD; Teixeira, LF; Nobrega, R;

Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2

Abstract
This paper presents an autonomous navigation and position estimation framework which enables an Unmanned Aerial Vehicle (UAV) to safely navigate in indoor environments. The system uses both the on-board Inertial Measurement Unit (IMU) and the front camera of an AR.Drone platform, together with a laptop computer where all the data is processed. The system is composed of the following modules: navigation, door detection and position estimation. For navigation, the system relies on the detection of the vanishing point using the Hough transform for wall detection and avoidance. Door detection relies not only on the detection of the contours but also on the recesses of each door, using the latter as the main detector and the former as an additional validation for higher precision. For position estimation, the system relies on pre-coded information about the floor on which the drone is navigating, and on the velocity of the drone provided by its IMU. Several flight experiments show that the drone is able to safely navigate in corridors while detecting evident doors and estimating its position. The developed navigation and door detection methods are reliable and enable a UAV to fly without the need for human intervention.
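The vanishing-point idea can be sketched as follows: corridor edges detected by a Hough transform come out in (rho, theta) normal form, and their common intersection estimates the vanishing point. Averaging pairwise intersections is our simplification for illustration, not necessarily the paper's aggregation method.

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersect two lines in Hough normal form x*cos(t) + y*sin(t) = rho."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(A, np.array([r1, r2]))

def vanishing_point(lines):
    """Estimate the vanishing point as the mean of all pairwise
    intersections, skipping near-parallel line pairs."""
    pts = [line_intersection(lines[i], lines[j])
           for i in range(len(lines)) for j in range(i + 1, len(lines))
           if abs(lines[i][1] - lines[j][1]) > 1e-3]
    return np.mean(pts, axis=0)

# Three synthetic corridor edges that all pass through the point (100, 50)
p = np.array([100.0, 50.0])
lines = [(p @ np.array([np.cos(t), np.sin(t)]), t) for t in (0.3, 1.0, 2.0)]
vp = vanishing_point(lines)
```

Steering toward the estimated vanishing point keeps the drone centred between the corridor walls.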

2016

User interface design guidelines for smartphone applications for people with Parkinson's disease

Authors
Nunes, F; Silva, PA; Cevada, J; Barros, AC; Teixeira, L;

Publication
UNIVERSAL ACCESS IN THE INFORMATION SOCIETY

Abstract
Parkinson's disease (PD) is often responsible for difficulties in interacting with smartphones; however, research has not yet addressed these issues or how they challenge people with Parkinson's (PwP). This paper specifically investigates the symptoms and characteristics of PD that may influence the interaction with smartphones, in order to contribute in this direction. The research was based on a literature review of PD symptoms, eight semi-structured interviews with healthcare professionals and observations of PwP, and usability experiments with 39 PwP. Contributions include a list of PD symptoms that may influence the interaction with smartphones, a set of experimental results evaluating the performance of four gestures (tap, swipe, multiple-tap, and drag), and 12 user interface design guidelines for creating smartphone user interfaces for PwP. Findings contribute to the work of researchers and practitioners alike engaged in designing user interfaces for PwP or in the broader area of inclusive design.

2015

Automatic Analysis of Lung Function Based on Smartphone Recordings

Authors
Teixeira, JF; Teixeira, LF; Fonseca, J; Jacinto, T;

Publication
BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES, BIOSTEC 2015

Abstract
Over 250 million people worldwide are affected by chronic lung conditions such as asthma and COPD. These can cause breathlessness, a harsh decrease in quality of life and, if left undetected or not properly managed, even death. In this paper, we addressed some of the lines of development suggested in earlier work, concerning the design of a smartphone lung function classification app which would only use recordings from the built-in microphone. A more systematic method to evaluate the relevant combinations of methods was devised, and an additional set of 44 recordings was used for testing purposes. The previous 101 recordings were kept for training the models. The results made it possible to further reduce the signal-processing pipeline, leading to the use of 6 envelopes per recording, half of the previous amount. An analysis of the classification performance is provided for both previous tasks: differentiating Normal from Abnormal lung function, and distinguishing between multiple lung function patterns. The results from this project encourage further development of the system.
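The envelopes mentioned above can be illustrated with a simple moving-RMS amplitude envelope over an audio signal; this is a generic stand-in of ours, and the paper's actual 6-envelope pipeline is not reproduced here.

```python
import numpy as np

def rms_envelope(signal, win):
    """Moving-RMS amplitude envelope of a 1-D signal."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(signal ** 2, kernel, mode="same"))

# A 50 Hz tone sampled at 1 kHz: away from the edges, the envelope of a
# unit-amplitude sine sits near 1/sqrt(2) (the RMS of a full cycle)
fs, f = 1000, 50
t = np.arange(fs) / fs
env = rms_envelope(np.sin(2 * np.pi * f * t), win=100)
```

Features computed from such envelopes, rather than from the raw waveform, are what a classifier in this kind of pipeline would consume.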

Supervised theses

2017

Detecting garment and its landmarks

Author
Daniel Fernandes Gomes

Institution
UP-FEUP

2017

Sistema de Gestão e Apoio à Produção (Production Management and Support System)

Author
Joana Peneda Paiva Cubal de Almeida

Institution
UP-FEUP

2017

Identificação de danos em veículos sinistrados através de imagens (Image-based identification of damage in crashed vehicles)

Author
José Pedro Lobo Marinho Trocado Moreira

Institution
UP-FEUP

2017

Gesture Recognition for Human-Robot Interaction for Service Robots

Author
Patrick de Sousa

Institution
UP-FEUP

2017

Statistical Comparison of Different Machine-Learning Approaches for Malaria Parasites Detection in Microscopic Images

Author
Mafalda Falcão Torres Veiga de Ferreira

Institution
UP-FEUP