
About

Luis F. Teixeira holds a PhD in Electrical and Computer Engineering from the University of Porto, in the area of computer vision (2009). He is currently an Assistant Professor at the Department of Informatics Engineering of the Faculty of Engineering, University of Porto, and a researcher at INESC TEC. Previously, he was a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning and interactive systems.

Publications

2020

Understanding the Impact of Artificial Intelligence on Services

Authors
Ferreira, P; Teixeira, JG; Teixeira, LF;

Publication
Lecture Notes in Business Information Processing

Abstract
Services are the backbone of modern economies and are increasingly supported by technology. Meanwhile, there is an accelerated growth of new technologies that are able to learn from themselves, providing more and more relevant results, i.e. Artificial Intelligence (AI). While there have been significant advances in the capabilities of AI, the impacts of this technology on service provision are still unknown. Conceptual research either claims that AI offers a way to augment human capabilities or positions it as a threat to human jobs. The objective of this study is to better understand the impact of AI on service, namely by understanding current trends in AI and how they do, and will, impact service provision. To achieve this, a qualitative study following the Grounded Theory methodology was performed with ten Artificial Intelligence experts selected from industry and academia. © Springer Nature Switzerland AG 2020.

2020

Deep Learning for Interictal Epileptiform Discharge Detection from Scalp EEG Recordings

Authors
Lourenço, C; Tjepkema-Cloostermans, MC; Teixeira, LF; van Putten, MJAM;

Publication
IFMBE Proceedings

Abstract
Interictal Epileptiform Discharge (IED) detection in EEG signals is widely used in the diagnosis of epilepsy. Visual analysis of EEGs by experts remains the gold standard, outperforming current computer algorithms. Deep learning methods can be an automated way to perform this task. We trained a VGG network using 2-s EEG epochs from patients with focal and generalized epilepsy (39 and 40 patients, respectively, 1977 epochs total) and 53 normal controls (110770 epochs). Five-fold cross-validation was performed on the training set. Model performance was assessed on an independent set (734 IEDs from 20 patients with focal and generalized epilepsy and 23040 normal epochs from 14 controls). Network visualization techniques (filter visualization and occlusion) were applied. The VGG yielded an Area Under the ROC Curve (AUC) of 0.96 (95% Confidence Interval (CI) = 0.95 - 0.97). At 99% specificity, the sensitivity was 79% and only one sample was misclassified per two minutes of analyzed EEG. Filter visualization showed that filters from higher-level layers display patches of activity indicative of IED detection. Occlusion showed that the model correctly identified IED shapes. We show that deep neural networks can reliably identify IEDs, which may lead to a fundamental shift in clinical EEG analysis. © 2020, Springer Nature Switzerland AG.
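
For readers curious about what such a pipeline looks like in code, below is a minimal PyTorch sketch of a VGG-style convolutional network for scoring 2-s EEG epochs as IED vs. normal. The channel count, sampling rate, and layer sizes are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a VGG-style CNN for binary IED detection on 2-s EEG epochs.
# Hypothetical shapes: 19 EEG channels sampled at 128 Hz (256 samples per epoch);
# the actual configuration used in the paper may differ.
import torch
import torch.nn as nn

class SmallVGG(nn.Module):
    def __init__(self, n_channels: int = 19, n_samples: int = 256):
        super().__init__()
        def block(c_in, c_out):
            # two convolutions followed by max-pooling, as in VGG
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool1d(2),
            )
        self.features = nn.Sequential(block(n_channels, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * (n_samples // 8), 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 1),  # single logit: IED vs. normal
        )

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x))

# Example: score a batch of 2-s epochs
model = SmallVGG()
epochs = torch.randn(8, 19, 256)
probs = torch.sigmoid(model(epochs))  # probability of containing an IED
```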

2020

Understanding the decisions of CNNs: An in-model approach

Authors
Rio-Torto, I; Fernandes, K; Teixeira, LF;

Publication
Pattern Recognition Letters

Abstract
With the outstanding predictive performance of Convolutional Neural Networks on different tasks and their widespread use in real-world scenarios, it is essential to understand and trust these black-box models. While most of the literature focuses on post-model methods, we propose a novel in-model joint architecture, composed of an explainer and a classifier. This architecture outputs not only a class label, but also a visual explanation of that decision, without the need for additional labelled data to train the explainer besides the image class. The model is trained end-to-end, with the classifier taking as input an image and the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of that explanation. Moreover, this approach can be employed with any classifier, provided that the necessary connections to the explainer are made. We also propose a three-phase training process and two alternative custom loss functions that regularise the produced explanations and encourage desired properties, such as sparsity and spatial contiguity. The architecture was validated on two datasets (a subset of ImageNet and a cervical cancer dataset) and the obtained results show that it is able to produce meaningful image- and class-dependent visual explanations, without direct supervision, aligned with intuitive visual features associated with the data. Quantitative assessment of explanation quality was conducted through iterative perturbation of the input image according to the explanation heatmaps. The impact on classification performance is studied in terms of average function value and AOPC (Area Over the MoRF (Most Relevant First) Curve). For further evaluation, we propose POMPOM (Percentage of Meaningful Pixels Outside the Mask) as another measurable criterion of explanation goodness. These analyses showed that the proposed method outperformed state-of-the-art post-model methods, such as LRP (Layer-wise Relevance Propagation).
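
As a rough illustration of the in-model idea, the sketch below couples a hypothetical explainer that outputs a relevance heatmap with an arbitrary classifier that only sees the heatmap-gated image, plus a simple loss term encouraging sparsity and spatial contiguity. All module names and layer sizes are assumptions; the paper's actual architecture and three-phase training are not reproduced here.

```python
# Sketch of an in-model explainer + classifier: the explainer produces a per-pixel
# relevance map that gates the image seen by the classifier; an auxiliary loss
# encourages sparse, spatially contiguous explanations. Sizes are illustrative.
import torch
import torch.nn as nn

class Explainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # heatmap in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

class ExplainerClassifier(nn.Module):
    def __init__(self, classifier: nn.Module):
        super().__init__()
        self.explainer = Explainer()
        self.classifier = classifier  # any image classifier can be plugged in
    def forward(self, x):
        heatmap = self.explainer(x)
        logits = self.classifier(x * heatmap)  # classifier sees only "explained" regions
        return logits, heatmap

def explanation_loss(heatmap, l1_weight=1e-3, tv_weight=1e-3):
    # sparsity (L1) plus total variation for spatial contiguity
    sparsity = heatmap.abs().mean()
    tv = (heatmap[..., 1:, :] - heatmap[..., :-1, :]).abs().mean() + \
         (heatmap[..., :, 1:] - heatmap[..., :, :-1]).abs().mean()
    return l1_weight * sparsity + tv_weight * tv

# Example: plug in a small CNN classifier and compute a joint loss
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 10),
)
model = ExplainerClassifier(classifier)
logits, heatmap = model(torch.randn(2, 3, 64, 64))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1])) + explanation_loss(heatmap)
```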

2019

GarmNet: Improving Global with Local Perception for Robotic Laundry Folding

Authors
Fernandes Gomes, D; Luo, S; Teixeira, LF;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Developing autonomous assistants to help with domestic tasks is a vital topic in robotics research. Among these tasks, garment folding is one that is still far from being achieved, mainly due to the large number of possible configurations that a crumpled piece of clothing may exhibit. Research has been done on either estimating the pose of the garment as a whole or detecting the landmarks for grasping separately. However, such works constrain the capability of the robots to perceive the states of the garment by limiting the representations to a single task. In this paper, we propose a novel end-to-end deep learning model named GarmNet that is able to simultaneously localize the garment and detect landmarks for grasping. The localization of the garment represents the global information for recognising the category of the garment, whereas the detection of landmarks can facilitate subsequent grasping actions. We train and evaluate our proposed GarmNet model using the CloPeMa Garment dataset that contains 3,330 images of different garment types in different poses. The experiments show that the inclusion of landmark detection (GarmNet-B) largely improves garment localization, with an error rate 24.7% lower. Solutions such as ours are important for robotics applications, as they scale to many classes and are memory- and processing-efficient. © Springer Nature Switzerland AG 2019.
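
A hedged sketch of the two-headed design described above: a shared convolutional backbone feeds a global localization/classification head and a local landmark-heatmap head. The class count, number of landmarks, backbone depth, and output parameterisation are hypothetical, not taken from the paper.

```python
# Sketch of a two-headed network in the spirit of GarmNet: a shared backbone
# feeds (1) a garment classification + bounding-box head (global perception)
# and (2) a landmark-heatmap head (local perception). Sizes are assumptions.
import torch
import torch.nn as nn

class GarmNetSketch(nn.Module):
    def __init__(self, n_classes: int = 8, n_landmarks: int = 6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # global head: garment class logits and a bounding box (cx, cy, w, h)
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes + 4)
        )
        # local head: one spatial heatmap per grasping landmark
        self.landmark_head = nn.Conv2d(64, n_landmarks, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        globals_ = self.global_head(feats)
        cls_logits, bbox = globals_[:, :-4], globals_[:, -4:]
        heatmaps = self.landmark_head(feats)
        return cls_logits, bbox, heatmaps

# Example forward pass on a 128x128 RGB image
cls_logits, bbox, heatmaps = GarmNetSketch()(torch.randn(1, 3, 128, 128))
```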

Supervised theses

2019

Forecasting stock trends through Machine Learning

Author
José Diogo Teixeira de Sousa Seca

Institution
UP-FEUP

2019

Framework for genomic based cancer studies using Machine Learning algorithms

Author
João Alexandre Gonçalinho Loureiro

Institution
UP-FEUP

2019

DL4Malaria: Deep Learning Approaches for the Automated Detection and Characterisation of Malaria Parasites on Thin Blood Smear Images

Author
Ana Filipa Teixeira Sampaio

Institution
UP-FEUP

2019

Deep Learning for identification and quantification of oncocytic cells in microscopic images

Author
Luís Telmo Soares Costa

Institution
UP-FEUP

2019

Deep Learning for EEG Analysis in Epilepsy

Author
Catarina da Silva Lourenço

Institution
UP-FEUP