About

Luís F. Teixeira holds a PhD in Electrical and Computer Engineering from the University of Porto, in the area of computer vision (2009). He is currently an Assistant Professor at the Department of Informatics Engineering of the Faculty of Engineering of the University of Porto and a researcher at INESC TEC. Previously, he was a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning, and interactive systems.

Topics of interest
Details

  • Name

    Luís Filipe Teixeira
  • Position

    Senior Researcher
  • Since

    17 September 2001
Publications

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, C; Teixeira, F;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.
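
As an illustrative aside (not taken from the survey), the sketch below shows one of the simplest visual explanation methods the survey covers: an input-gradient saliency map. The toy model, input size, and number of classes are assumptions for demonstration only.

import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for a medical-image model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 1, 224, 224, requires_grad=True)  # dummy grayscale scan

logits = model(image)
predicted = logits.argmax(dim=1).item()
# Gradient of the predicted-class score with respect to the input pixels.
logits[0, predicted].backward()
saliency = image.grad.abs().squeeze()  # (224, 224) pixel-importance map

print(saliency.shape, float(saliency.max()))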

2023

Deep learning-based human action recognition to leverage context awareness in collaborative assembly

Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract
Human-Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that are quickly adjustable to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system powered by computer vision and deep learning to interpret implicit communication cues of the operator. The proposed system, which is based on a residual convolutional neural network with 34 layers and a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context is then integrated in a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model showed strong performance, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for the action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the performed human action recognition to command the adequate robot actions with near-perfect accuracy. As such, the proposed system was considered successful at increasing the natural collaboration level of the considered assembly station.
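
A minimal sketch of the general ResNet-34 + LSTM pattern described above: per-frame CNN features fed to an LSTM for clip-level action classification. Layer sizes, the number of action classes, and the clip length are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet34

class ActionRecognizer(nn.Module):
    def __init__(self, num_actions=10, hidden_size=256):
        super().__init__()
        backbone = resnet34(weights=None)
        backbone.fc = nn.Identity()           # keep 512-d per-frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_actions)

    def forward(self, clip):                  # clip: (batch, frames, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.backbone(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # classify from the last time step

model = ActionRecognizer()
scores = model(torch.rand(2, 8, 3, 224, 224))  # two clips of 8 frames each
print(scores.shape)                            # (2, 10)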

2023

GASTeN: Generative Adversarial Stress Test Networks

Authors
Cunha, L; Soares, C; Restivo, A; Teixeira, LF;

Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XXI, IDA 2023

Abstract
Concerns with the interpretability of ML models are growing as the technology is used in increasingly sensitive domains (e.g., health and public administration). Synthetic data can be used to understand models better, for instance, if the examples are generated close to the frontier between classes. However, data augmentation techniques, such as Generative Adversarial Networks (GAN), have been mostly used to generate training data that leads to better models. We propose a variation of GANs that, given a model, generates realistic data that is classified with low confidence by a given classifier. The generated examples can be used to gain insights into the frontier between classes. We empirically evaluate our approach on two well-known image classification benchmark datasets, MNIST and Fashion MNIST. Results show that the approach is able to generate images that are closer to the frontier than the original ones, but still realistic. Manual inspection confirms that some of those images are confusing even for humans.
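
An illustrative sketch of the core idea, not the authors' implementation: add a term to the generator loss that pushes a frozen binary classifier's output for generated samples towards 0.5, i.e. realistic but low-confidence, near-boundary images. Function names and the weighting are assumptions.

import torch
import torch.nn.functional as F

def generator_loss(discriminator_logits, classifier_probs, weight=1.0):
    # Standard non-saturating GAN generator loss plus a "confusion" penalty.
    # discriminator_logits: D(G(z)) for a batch of generated images.
    # classifier_probs: P(class=1 | G(z)) from the frozen binary classifier.
    # weight: trade-off between realism and proximity to the decision boundary.
    adversarial = F.binary_cross_entropy_with_logits(
        discriminator_logits, torch.ones_like(discriminator_logits))
    confusion = (classifier_probs - 0.5).abs().mean()  # 0 when maximally uncertain
    return adversarial + weight * confusion

# Toy usage with random tensors standing in for network outputs.
d_out = torch.randn(16, 1)
c_out = torch.rand(16, 1)
print(float(generator_loss(d_out, c_out)))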

2023

MobileWeatherNet for LiDAR-Only Weather Estimation

Authors
da Silva, MP; Carneiro, D; Fernandes, J; Teixeira, LF;

Publication
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN

Abstract
An autonomous vehicle relying on LiDAR data should be able to assess its limitations in real time without depending on external information or additional sensors. The point cloud generated by the sensor is subject to significant degradation under adverse weather conditions (rain, fog, and snow), which limits the vehicle's visibility and performance. With this in mind, we show that point cloud data contains sufficient information to estimate the weather accurately and present MobileWeatherNet, a LiDAR-only convolutional neural network that uses the bird's-eye-view 2D projection to estimate the weather condition from point clouds, improving state-of-the-art performance by 15% in terms of balanced accuracy while reducing inference time by 63%. Moreover, this paper demonstrates that, among common architectures, the use of the bird's-eye view significantly enhances their performance without an increase in complexity. To the best of our knowledge, this is the first approach that uses deep learning for weather estimation from point cloud data in the form of a bird's-eye-view projection.
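
A minimal sketch of a bird's-eye-view (BEV) projection of the kind such a network consumes: LiDAR points are binned onto an x-y grid and each cell stores the point count. The grid extent, resolution, and channel choice are assumptions for illustration, not the paper's configuration.

import numpy as np

def bev_projection(points, x_range=(-50, 50), y_range=(-50, 50), cell=0.5):
    # points: (N, 3) array of x, y, z LiDAR coordinates in metres.
    width  = int((x_range[1] - x_range[0]) / cell)
    height = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((height, width), dtype=np.float32)

    xs = ((points[:, 0] - x_range[0]) / cell).astype(int)
    ys = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
    np.add.at(grid, (ys[valid], xs[valid]), 1.0)  # per-cell point count
    return grid

cloud = np.random.uniform(-60, 60, size=(100_000, 3))  # synthetic point cloud
print(bev_projection(cloud).shape)                      # (200, 200)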

2023

Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
CoRR

Abstract

Supervised theses

2022

Human Action and Facial Expressions Recognition in a VR game

Author
Júlio Pinto de Castro Lopes

Institution
UP-FEUP

2022

Unconstrained Human Pose Estimation to Support Breast Cancer Survivor's Prospective Surveillance

Author
João Pedro da Silva Monteiro

Institution
UP-FEUP

2022

GASTeN: Generative Adversarial Stress Test Networks

Author
Luís Pedro Pereira Lopes Mascarenhas Cunha

Institution
UP-FEUP

2022

Learning to write medical reports from EEG data

Author
Ana Maria Amaro de Sousa

Institution
UP-FEUP

2022

6DoF tool path generator from CAD model for visual inspection of part surfaces

Author
Luís Rodrigues de Castro

Institution
UP-FEUP