
Publications by Luís Filipe Teixeira

2016

Visual-Inertial Based Autonomous Navigation

Authors
Martins, FD; Teixeira, LF; Nobrega, R;

Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2

Abstract
This paper presents an autonomous navigation and position estimation framework which enables an Unmanned Aerial Vehicle (UAV) to safely navigate in indoor environments. The system uses both the on-board Inertial Measurement Unit (IMU) and the front camera of an AR.Drone platform, together with a laptop computer where all the data is processed. The system is composed of the following modules: navigation, door detection and position estimation. For navigation, the system relies on the detection of the vanishing point using the Hough transform for wall detection and avoidance. Door detection relies not only on the detection of the door contours but also on the recesses of each door, using the latter as the main detector and the former as an additional validation for higher precision. For position estimation, the system relies on pre-coded information about the floor in which the drone is navigating and on the velocity of the drone provided by its IMU. Several flight experiments show that the drone is able to safely navigate in corridors while detecting evident doors and estimating its position. The developed navigation and door detection methods are reliable and enable a UAV to fly without the need for human intervention.
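
As an illustration of the vanishing-point step described above, the following is a minimal sketch (not the authors' implementation) of estimating a corridor vanishing point from a single frame with OpenCV's probabilistic Hough transform; the function names, thresholds and angle ranges are assumptions chosen for illustration only.

```python
# Illustrative sketch, not the paper's code: estimate a corridor vanishing
# point by intersecting oblique Hough line segments and taking a robust median.
import cv2
import numpy as np

def _intersect(a, b):
    """Intersection of two line segments extended to infinite lines, or None if parallel."""
    x1, y1, x2, y2 = a
    x3, y3, x4, y4 = b
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-6:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return (px, py)

def estimate_vanishing_point(frame_bgr):
    """Return an (x, y) vanishing-point estimate for a corridor frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None

    # Keep oblique segments (typical wall/floor edges converging towards the
    # vanishing point); discard near-horizontal and near-vertical ones.
    segments = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 15 < angle < 75 or 105 < angle < 165:
            segments.append((x1, y1, x2, y2))

    # Intersect every pair of kept segments and take the median intersection.
    candidates = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = _intersect(segments[i], segments[j])
            if p is not None:
                candidates.append(p)
    if not candidates:
        return None
    return tuple(np.median(np.array(candidates), axis=0))
```

The median over all pairwise intersections is just one simple way to make the estimate robust to spurious lines; the paper itself does not specify this particular aggregation.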

2016

User interface design guidelines for smartphone applications for people with Parkinson's disease

Authors
Nunes, F; Silva, PA; Cevada, J; Barros, AC; Teixeira, L;

Publication
UNIVERSAL ACCESS IN THE INFORMATION SOCIETY

Abstract
Parkinson's disease (PD) is often responsible for difficulties in interacting with smartphones; however, research has not yet addressed these issues and how they challenge people with Parkinson's (PwP). This paper specifically investigates the symptoms and characteristics of PD that may influence the interaction with smartphones, in order to contribute in this direction. The research was based on a literature review of PD symptoms, eight semi-structured interviews with healthcare professionals and observations of PwP, and usability experiments with 39 PwP. Contributions include a list of PD symptoms that may influence the interaction with smartphones, a set of experimental results that evaluated the performance of four gestures (tap, swipe, multiple-tap, and drag), and 12 user interface design guidelines for creating smartphone user interfaces for PwP. Findings contribute to the work of researchers and practitioners alike engaged in designing user interfaces for PwP or in the broader area of inclusive design.

2014

Active Mining of Parallel Video Streams

Authors
Khoshrou, Samaneh; Cardoso, Jaime S.; Teixeira, Luís Filipe;

Publication
CoRR

Abstract

2015

Analysis of Expressiveness of Portuguese Sign Language Speakers

Authors
Rodrigues, IV; Pereira, EM; Teixeira, LF;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2015)

Abstract
Nowadays, several communication gaps isolate deaf people from many social activities. This work studies the expressiveness of gestures in Portuguese Sign Language (PSL) speakers and the differences between deaf and hearing people. It is a first effort towards the ultimate goal of understanding emotional and behavioural patterns among such populations. In particular, our work designs solutions for the following problems: (i) differentiation between deaf and hearing people, (ii) identification of different conversational topics based on body expressiveness, and (iii) identification of different levels of mastery of PSL speakers through feature analysis. With these aims, we build a complete and novel dataset that captures the duo-interaction between deaf and hearing people under several conversational topics. Results show high recognition and classification rates.

2017

Pre-trained Convolutional Networks and Generative Statistical Models: A Comparative Study in Large Datasets

Authors
Michael, J; Teixeira, LF;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2017)

Abstract
This study explored the viability of out-of-the-box, pre-trained ConvNet models as a tool to generate features for large-scale classification tasks. A comparison was drawn with generative methods for vocabulary generation. The two methods were chosen in an attempt to integrate other datasets (transfer learning) and unlabelled data, respectively. Both methods were also used together, studying the viability of a ConvNet model to estimate category labels of unlabelled images. All experiments pertaining to this study were carried out over a two-class set, later expanded into a five-category dataset. The pre-trained models used were obtained from the Caffe Model Zoo. The study showed that the pre-trained model achieved the best results for the binary dataset, with an accuracy of 0.945. However, for the five-class dataset, generative vocabularies outperformed the ConvNet (0.91 vs. 0.861). Furthermore, when replacing labelled images with unlabelled ones during training, acceptable accuracy scores were obtained (as high as 0.903). Additionally, it was observed that linear kernels perform particularly well when used with generative models. This was especially relevant when compared to ConvNets, which require days of training even when using multiple GPUs.
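
As a rough illustration of the "pre-trained ConvNet as feature extractor plus linear classifier" setup described above, the sketch below swaps the paper's Caffe Model Zoo pipeline for a torchvision backbone and a scikit-learn linear SVM; the model choice, preprocessing values, and the `train_images`/`train_labels` placeholders are assumptions for illustration, not the authors' configuration.

```python
# Illustrative sketch: off-the-shelf pre-trained ConvNet used as a fixed
# feature extractor, with a linear SVM trained on the extracted features.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Pre-trained backbone with the classification head removed (features only).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Standard ImageNet preprocessing for the pre-trained weights.
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """List of PIL images -> (N, 512) feature matrix from the frozen backbone."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# Hypothetical usage; train_images, train_labels and test_images are placeholders.
# clf = LinearSVC().fit(extract_features(train_images), train_labels)
# predictions = clf.predict(extract_features(test_images))
```

Extracting fixed features once and fitting only a linear model is what keeps this kind of transfer-learning setup cheap compared with training a ConvNet end to end.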

2015

Experimental Evaluation of the Bag-of-Features Model for Unsupervised Learning of Images

Authors
Afonso, M; Teixeira, LF;

Publication
Proceedings of the British Machine Vision Conference 2015, BMVC 2015, Swansea, UK, September 7-10, 2015

Abstract
