About

Luis F. Teixeira holds a Ph.D. in Electrical and Computer Engineering from Universidade do Porto in the area of computer vision (2009). He is currently an Assistant Professor at the Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto, and a researcher at INESC TEC. Previously he was a researcher at INESC Porto (2001-2008), Visiting Researcher at the University of Victoria (2006), and Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning, and interactive systems.

Publications

2020

Understanding the Impact of Artificial Intelligence on Services

Authors
Ferreira, P; Teixeira, JG; Teixeira, LF;

Publication
Lecture Notes in Business Information Processing

Abstract
Services are the backbone of modern economies and are increasingly supported by technology. Meanwhile, there is an accelerated growth of new technologies that are able to learn by themselves, providing increasingly relevant results, i.e. Artificial Intelligence (AI). While there have been significant advances in the capabilities of AI, the impacts of this technology on service provision are still unknown. Conceptual research either presents AI as a way to augment human capabilities or positions it as a threat to human jobs. The objective of this study is to better understand the impact of AI on services, namely by understanding current trends in AI and how they are impacting, and will impact, service provision. To achieve this, a qualitative study following the Grounded Theory methodology was performed with ten Artificial Intelligence experts selected from industry and academia. © Springer Nature Switzerland AG 2020.

2020

Deep Learning for Interictal Epileptiform Discharge Detection from Scalp EEG Recordings

Authors
Lourenço, C; Tjepkema-Cloostermans, MC; Teixeira, LF; van Putten, MJAM;

Publication
IFMBE Proceedings

Abstract
Interictal Epileptiform Discharge (IED) detection in EEG signals is widely used in the diagnosis of epilepsy. Visual analysis of EEGs by experts remains the gold standard, outperforming current computer algorithms. Deep learning methods can be an automated way to perform this task. We trained a VGG network using 2-s EEG epochs from patients with focal and generalized epilepsy (39 and 40 patients, respectively, 1,977 epochs total) and 53 normal controls (110,770 epochs). Five-fold cross-validation was performed on the training set. Model performance was assessed on an independent set (734 IEDs from 20 patients with focal and generalized epilepsy and 23,040 normal epochs from 14 controls). Network visualization techniques (filter visualization and occlusion) were applied. The VGG yielded an Area Under the ROC Curve (AUC) of 0.96 (95% Confidence Interval (CI) = 0.95-0.97). At 99% specificity, the sensitivity was 79% and only one sample was misclassified per two minutes of analyzed EEG. Filter visualization showed that filters from higher-level layers display patches of activity indicative of IED detection. Occlusion showed that the model correctly identified IED shapes. We show that deep neural networks can reliably identify IEDs, which may lead to a fundamental shift in clinical EEG analysis. © 2020, Springer Nature Switzerland AG.
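
The abstract gives no implementation details; purely as an illustration of the kind of model described, the following is a minimal PyTorch sketch of a small VGG-style convolutional classifier for fixed-length EEG epochs. The channel count, sampling rate, epoch length and layer sizes are assumptions, not values from the study.

```python
# Hypothetical sketch of a small VGG-style classifier for 2-s EEG epochs.
# Shapes are assumptions: 19 scalp channels sampled at 125 Hz -> 250 samples.
import torch
import torch.nn as nn

class SmallVGG1D(nn.Module):
    def __init__(self, n_channels: int = 19, n_classes: int = 2):
        super().__init__()
        def block(c_in, c_out):
            # Two 3-tap convolutions followed by max pooling, VGG style.
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool1d(2),
            )
        self.features = nn.Sequential(block(n_channels, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, n_classes),
        )

    def forward(self, x):                       # x: (batch, channels, samples)
        return self.classifier(self.features(x))

model = SmallVGG1D()
epochs_batch = torch.randn(8, 19, 250)          # 8 two-second epochs (dummy data)
logits = model(epochs_batch)                    # (8, 2): IED vs. normal scores
```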

2019

GarmNet: Improving Global with Local Perception for Robotic Laundry Folding

Authors
Fernandes Gomes, D; Luo, S; Teixeira, LF;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Developing autonomous assistants to help with domestic tasks is a vital topic in robotics research. Among these tasks, garment folding is one that is still far from being achieved, mainly due to the large number of possible configurations that a crumpled piece of clothing may exhibit. Research has been done on either estimating the pose of the garment as a whole or separately detecting the landmarks for grasping. However, such works constrain the robots' capability to perceive the state of the garment by limiting the representation to a single task. In this paper, we propose a novel end-to-end deep learning model named GarmNet that is able to simultaneously localize the garment and detect landmarks for grasping. The localization of the garment represents the global information for recognising the category of the garment, whereas the detection of landmarks can facilitate subsequent grasping actions. We train and evaluate our proposed GarmNet model using the CloPeMa Garment dataset, which contains 3,330 images of different garment types in different poses. The experiments show that the inclusion of landmark detection (GarmNet-B) can largely improve garment localization, with an error rate 24.7% lower. Solutions such as ours are important for robotics applications, as they scale to many classes and are memory- and processing-efficient. © Springer Nature Switzerland AG 2019.
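
As a rough illustration of the joint global/local idea, the snippet below sketches a hypothetical two-head network in PyTorch, with a shared backbone feeding a garment classification/localization head and a landmark-detection head. It is not the published GarmNet architecture; the class, box and landmark counts and all layer sizes are assumptions.

```python
# Hypothetical two-head network: a shared backbone feeds a global garment
# localization/classification head and a local landmark-detection head.
import torch
import torch.nn as nn

class TwoHeadGarmentNet(nn.Module):
    def __init__(self, n_garment_classes: int = 8, n_landmarks: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(          # shared convolutional features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Global head: garment class scores plus a bounding box (cx, cy, w, h).
        self.garment_head = nn.Linear(64, n_garment_classes + 4)
        # Local head: (x, y) coordinates for each grasping landmark.
        self.landmark_head = nn.Linear(64, n_landmarks * 2)

    def forward(self, x):
        feats = self.backbone(x)
        return self.garment_head(feats), self.landmark_head(feats)

net = TwoHeadGarmentNet()
images = torch.randn(4, 3, 224, 224)            # dummy RGB batch
garment_out, landmark_out = net(images)
```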

2019

Towards a Joint Approach to Produce Decisions and Explanations Using CNNs

Authors
Rio-Torto, I; Fernandes, K; Teixeira, LF;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Convolutional Neural Networks, as well as other deep learning methods, have shown remarkable performance on tasks like classification and detection. However, these models largely remain black boxes. With the widespread use of such networks in real-world scenarios and with the growing demand for the right to explanation, especially in highly regulated areas like medicine and criminal justice, generating accurate predictions is no longer enough. Machine learning models have to be explainable, i.e., understandable to humans, which entails being able to present the reasons behind their decisions. While most of the literature focuses on post-model methods, we propose an in-model CNN architecture, composed of an explainer and a classifier. The model is trained end-to-end, with the classifier taking as input not only images from the dataset but also the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of such explanation. We also developed a synthetic dataset generation framework that allows for automatic annotation and the creation of easy-to-understand images that do not require the knowledge of an expert to be explained. Promising results were obtained, especially when using L1 regularisation, validating the potential of the proposed architecture and encouraging further research to improve its explainability and performance. © 2019, Springer Nature Switzerland AG.
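
To make the explainer/classifier coupling concrete, the following is a minimal PyTorch sketch, under assumed shapes and an assumed L1 weight, of an explainer that produces a relevance map, a classifier that consumes the image modulated by that map, and a joint loss with an L1 penalty on the explanation. It is illustrative only and not the authors' implementation.

```python
# Hypothetical sketch of an in-model explainer/classifier pair: the explainer
# produces a relevance map, the classifier sees the image modulated by it, and
# an L1 penalty encourages sparse (focused) explanations.
import torch
import torch.nn as nn

class Explainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel relevance in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x, explanation):
        return self.net(x * explanation)        # focus on the explained regions

explainer, classifier = Explainer(), Classifier()
images = torch.randn(4, 3, 64, 64)              # dummy batch
labels = torch.randint(0, 10, (4,))

explanation = explainer(images)
logits = classifier(images, explanation)
l1_weight = 1e-3                                # assumed regularisation weight
loss = nn.functional.cross_entropy(logits, labels) + l1_weight * explanation.abs().mean()
loss.backward()                                 # trains both modules end-to-end
```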

2018

Human-robot interaction based on gestures for service robots

Authors
de Sousa, P; Esteves, T; Campos, D; Duarte, F; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Neves, AJR; Teixeira, L;

Publication
Lecture Notes in Computational Vision and Biomechanics

Abstract
Gesture recognition is very important for Human-Robot Interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot autonomous shopping cart, mostly used by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, from which a bounding box focusing on the user is extracted. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region growing algorithm is applied to delete all other objects in the image. We again apply a threshold technique to the original image to obtain all the objects in front of the user. Intersecting the threshold-based segmentation result with the region growing result, we obtain candidate objects for the user's arms. After applying a labelling algorithm to obtain each object individually, a Principal Component Analysis is computed for each one to obtain its center and orientation. Using that information, we intersect the silhouette of the arm with a line, obtaining the upper point of the intersection, which indicates the hand position. A Kalman filter is then applied to track the hand and, based on state machines describing the gestures (Start, Stop, Pause), we perform gesture recognition. We tested the proposed approach in a real case scenario with different users and obtained an accuracy of around 89.7%. © 2018, Springer International Publishing AG.
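
To make one step of this pipeline concrete, the snippet below is a minimal NumPy sketch of how the center and orientation of a segmented arm blob can be estimated with Principal Component Analysis; the binary mask is dummy data and the function name is hypothetical.

```python
# Hypothetical sketch of the PCA step: estimate the center and orientation
# of a segmented (binary) arm blob from its pixel coordinates.
import numpy as np

def blob_center_and_orientation(mask: np.ndarray):
    """mask: 2-D boolean array marking the pixels of one labelled object."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    # Principal axis = eigenvector of the covariance with the largest eigenvalue.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0  # orientation in [0, 180)
    return center, angle

# Dummy diagonal blob as a stand-in for a segmented arm.
mask = np.zeros((100, 100), dtype=bool)
for i in range(60):
    mask[20 + i, 10 + i] = True

center, angle = blob_center_and_orientation(mask)
print(center, angle)   # roughly (39.5, 49.5) and ~45 degrees
```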

Supervised theses

2017

Transmedia Storytelling in B2B: the StorySD Case Study

Author
Ana Filipa Sousa Alves

Institution
UP-FEUP

2017

Production Management and Support System

Author
Joana Peneda Paiva Cubal de Almeida

Institution
UP-FEUP

2017

Gesture Recognition for Human-Robot Interaction for Service Robots

Author
Patrick de Sousa

Institution
UP-FEUP

2017

System for the Analysis and Validation of Company Data across Multiple Registration Bodies Worldwide

Author
Rui Filipe Fernandes Santos

Institution
UP-FEUP

2017

Screening tool to assess the risk of falling

Author
Alcino João Silva de Sousa

Institution
UP-FEUP