About

Davide Carneiro is a Coordinator Professor at the School of Technology and Management of the Polytechnic Institute of Porto. He is also an integrated researcher at INESC TEC. He holds a PhD with European Mention, awarded jointly by the Universities of Minho, Aveiro and Porto in 2013 through the MAP-i Doctoral Programme. His scientific research focuses on application areas of Artificial Intelligence and Data Science, including Alternative Dispute Resolution, Human-Computer Interaction, and Fraud Detection. He is also interested in problems related to meta-learning and explainability, and in how these can be applied in the context of real-world problems. In recent years he has participated in several funded research projects in the areas of Artificial Intelligence, Ambient Intelligence, Alternative Dispute Resolution, and Fraud Detection. He was the scientific coordinator of the Neurat project (NORTE-01-0247-FEDER-039900) and is the institutional coordinator of the European project EJUST ODR Scheme (JUST-2021-EJUSTICE, 101046468). At the national level, he is the Principal Investigator of the projects CEDEs - Continuously Evolving Distributed Ensembles (EXPL/CCI-COM/0706/2021) and xAIDMLS (CPCA-IAC/AV/475278/2022), funded by FCT. He is also currently a researcher in the European projects FACILITATE-AI and PRIVATEER.

He is the author of more than 150 scientific publications in his research areas, including one scientific book, three edited books, and more than 140 book chapters, papers in indexed international journals, and articles in conference proceedings. In parallel, he is strongly committed to the scientific supervision of students, involving them whenever possible in practical tasks within the research projects in which he participates.

Davide is a co-founder of AnyBrain, a Portuguese startup in the field of Human-Computer Interaction. The company develops software for fatigue detection in office environments (https://performetric.net/), for performance analysis in eSports (https://performetric.gg/), and for player identification and fraud detection in eSports (https://anybrain.gg/).

Topics of interest
Details

  • Name

    Davide Rua Carneiro
  • Position

    Senior Researcher
  • Since

    01 August 2022
Publications

2025

Using Explanations to Estimate the Quality of Computer Vision Models

Authors
Oliveira, F; Carneiro, D; Pereira, J;

Publication
HUMAN-CENTRED TECHNOLOGY MANAGEMENT FOR A SUSTAINABLE FUTURE, VOL 2, IAMOT

Abstract
Explainable AI (xAI) emerged as one of the ways of addressing the interpretability issues of the so-called black-box models. Most of the xAI artifacts proposed so far were designed, as expected, for human users. In this work, we posit that such artifacts can also be used by computer systems. Specifically, we propose a set of metrics derived from LIME explanations that can eventually be used to ascertain the quality of each output of an underlying image classification model. We validate these metrics against quantitative human feedback and identify four potentially interesting metrics for this purpose. This research is particularly useful in concept drift scenarios, in which models are deployed into production and there is no new labelled data with which to continuously evaluate them, making it impossible to know the current performance of the model.
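To make the idea of explanation-derived quality metrics concrete, the sketch below computes a few illustrative scores from a vector of LIME-style superpixel weights. The metric definitions (`concentration`, `sparsity`, `max_share`) are stand-ins invented for this sketch, not the metrics proposed in the paper.

```python
import numpy as np

def explanation_metrics(weights):
    """Compute simple quality scores from LIME-style explanation weights.

    `weights` holds per-superpixel importance scores, as a LIME image
    explanation would produce. All three metrics are illustrative only.
    """
    w = np.abs(np.asarray(weights, dtype=float))
    total = w.sum()
    if total == 0:
        return {"concentration": 0.0, "sparsity": 1.0, "max_share": 0.0}
    p = w / total  # normalise to an importance distribution
    return {
        # How much importance mass the top-3 regions hold.
        "concentration": float(np.sort(p)[::-1][:3].sum()),
        # Fraction of regions with negligible importance.
        "sparsity": float(np.mean(p < 0.01)),
        # Weight of the single most important region.
        "max_share": float(p.max()),
    }

m = explanation_metrics([0.9, 0.05, 0.03, 0.01, 0.0])
```

A highly concentrated explanation (most mass on a few regions) is one plausible signal that the model attended to a coherent object rather than scattered background.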

2025

Development of a Non-Invasive Clinical Machine Learning System for Arterial Pulse Wave Velocity Estimation

Authors
Martinez-Rodrigo, A; Pedrosa, J; Carneiro, D; Cavero-Redondo, I; Saz-Lara, A;

Publication
APPLIED SCIENCES-BASEL

Abstract
Arterial stiffness (AS) is a well-established predictor of cardiovascular events, including myocardial infarction and stroke. One of the most recognized methods for assessing AS is through arterial pulse wave velocity (aPWV), which provides valuable clinical insights into vascular health. However, its measurement typically requires specialized equipment, making it inaccessible in primary healthcare centers and low-resource settings. In this study, we developed and validated different machine learning models to estimate aPWV using common clinical markers routinely collected in standard medical examinations. Thus, we trained five regression models: Linear Regression, Polynomial Regression (PR), Gradient Boosting Regression, Support Vector Regression, and Neural Networks (NNs) on the EVasCu dataset, a cohort of apparently healthy individuals. A 10-fold cross-validation demonstrated that PR and NN achieved the highest predictive performance, effectively capturing nonlinear relationships in the data. External validation on two independent datasets, VascuNET (a healthy population) and ExIC-FEp (a cohort of cardiopathic patients), confirmed the robustness of PR and NN (R² > 0.90) across different vascular conditions. These results indicate that by using easily accessible clinical variables and AI-driven insights, it is possible to develop a cost-effective tool for aPWV estimation, enabling early cardiovascular risk stratification in underserved and rural areas where specialized AS measurement devices are unavailable.
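As a rough illustration of this evaluation protocol, the sketch below compares three of the regression families named in the abstract under 10-fold cross-validation. The data is a synthetic stand-in for the clinical markers (the EVasCu cohort is not reproduced here), and all model choices and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Synthetic stand-in for routine clinical markers (age, blood pressure, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
# A mildly nonlinear target, mimicking aPWV's nonlinear dependence on markers.
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

models = {
    "linear": LinearRegression(),
    "poly2": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,),
                                      max_iter=2000, random_state=0)),
}

# Mean 10-fold cross-validated R^2 per model, as in the paper's protocol.
scores = {name: cross_val_score(m, X, y, cv=10, scoring="r2").mean()
          for name, m in models.items()}
```

On data with a genuine quadratic term, the polynomial model should clearly outperform the purely linear one, mirroring the abstract's finding that models able to capture nonlinearities (PR, NN) do best.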

2024

Block size, parallelism and predictive performance: finding the sweet spot in distributed learning

Authors
Oliveira, F; Carneiro, D; Guimaraes, M; Oliveira, O; Novais, P;

Publication
INTERNATIONAL JOURNAL OF PARALLEL EMERGENT AND DISTRIBUTED SYSTEMS

Abstract
As distributed and multi-organization Machine Learning emerges, new challenges must be solved, such as diverse and low-quality data or real-time delivery. In this paper, we use a distributed learning environment to analyze the relationship between block size, parallelism, and predictor quality. Specifically, the goal is to find the optimum block size and the best heuristic to create distributed Ensembles. We evaluated three different heuristics and five block sizes on four publicly available datasets. Results show that using fewer but better base models matches or outperforms a standard Random Forest, and that 32 MB is the best block size.
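The block-based ensemble idea can be sketched as follows: partition the training data into fixed-size blocks (standing in for the file blocks of a distributed store), train one base model per block, then keep only the base models favoured by a selection heuristic. The block count and the "keep the top half by validation accuracy" heuristic below are illustrative choices, not the heuristics evaluated in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=0)

# One base model per data block, as in a distributed training setting.
n_blocks = 8
models = [DecisionTreeClassifier(random_state=0).fit(xb, yb)
          for xb, yb in zip(np.array_split(X_tr, n_blocks),
                            np.array_split(y_tr, n_blocks))]

# Heuristic: rank base models by validation accuracy and keep the top half,
# i.e. "fewer but better" base models.
best = sorted(models, key=lambda m: m.score(X_val, y_val), reverse=True)[:n_blocks // 2]

def majority_vote(models, X):
    # Aggregate binary predictions from the selected base models.
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

acc = (majority_vote(best, X_te) == y_te).mean()
```

Evaluating several block sizes and selection heuristics in exactly this loop is what lets the paper locate the sweet spot between parallelism and predictive quality.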

2024

Supervised and unsupervised techniques in textile quality inspections

Authors
Ferreira, HM; Carneiro, DR; Guimaraes, MA; Oliveira, FV;

Publication
5TH INTERNATIONAL CONFERENCE ON INDUSTRY 4.0 AND SMART MANUFACTURING, ISM 2023

Abstract
Quality inspection is a critical step in ensuring the quality and efficiency of textile production processes. With the increasing complexity and scale of modern textile manufacturing systems, the need for accurate and efficient quality inspection and defect detection techniques has become paramount. This paper compares supervised and unsupervised Machine Learning techniques for defect detection in the context of industrial textile production, in terms of their respective advantages and disadvantages, and their implementation and computational costs. We explore the use of an autoencoder for the detection of defects in textiles. The goal of this preliminary work is to find out if unsupervised methods can successfully train models with good performance without the need for defect-labelled data.
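The unsupervised route works by reconstruction: an autoencoder is trained only on defect-free samples, and at inspection time a high reconstruction error flags a likely defect, with no defect labels ever required. The sketch below illustrates this with a tiny MLP trained to reproduce its input; both the "textile patches" (random vectors) and the thresholding rule are illustrative stand-ins, not the paper's image autoencoder.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Defect-free "patches": 16-dimensional feature vectors of normal texture.
normal = rng.normal(loc=0.0, scale=0.1, size=(500, 16))

# A bottleneck of 4 units forces the model to learn a compressed code of
# normal texture; it is trained to reconstruct its own input.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=3000, random_state=0)
ae.fit(normal, normal)

def reconstruction_error(x):
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# Threshold set from errors on normal data (here the 99th percentile).
threshold = np.percentile(reconstruction_error(normal), 99)

# Patches shifted far from the normal distribution simulate visible defects.
defect = normal[:10] + 1.5
is_defect = reconstruction_error(defect) > threshold
```

Because the autoencoder has only ever learned to compress normal texture, anything it cannot reconstruct well is, by construction, unusual, which is exactly the property the paper probes.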

2024

Application of Meta Learning in Quality Assessment of Wearable Electrocardiogram Recordings

Authors
Huerta, A; Martínez-Rodrigo, A; Guimarâes, M; Carneiro, D; Rieta, JJ; Alcaraz, R;

Publication
ADVANCES IN DIGITAL HEALTH AND MEDICAL BIOENGINEERING, VOL 2, EHB-2023

Abstract
The high mortality rates caused by cardiovascular disorders (CVDs) place them among the leading non-communicable diseases according to the WHO, killing about 18 million people annually. It is crucial to detect arrhythmias or cardiovascular events at an early stage. For that purpose, novel portable acquisition devices have enabled long-term electrocardiographic (ECG) recording, the most common way to discover arrhythmias of a random nature such as atrial fibrillation (AF). Nonetheless, the acquisition environment can distort or even destroy the ECG recordings, hindering the proper diagnosis of CVDs. Thus, it is necessary to assess ECG signal quality automatically. The proposed approach combines feature and meta-feature extraction from 5-s ECG segments with the ability of machine learning classifiers to discern between high- and low-quality ECG segments. Three different approaches were tested, reaching accuracy values close to 83% with the original feature set and improving up to 90% when all the available meta-features were used. Moreover, within the high-quality group, performance on the segments belonging to the AF class improved by around 7%, to a rate of over 85%, when the meta-feature set was used. The extraction of meta-features improves accuracy even when only a subset of meta-features is selected from the whole set.
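The feature + meta-feature pipeline can be sketched end to end: extract base features from each 5-s segment, derive meta-features as statistics computed over those base features, and train a classifier to separate high- from low-quality segments. The synthetic "segments" (clean sinusoids vs. heavily noised ones) and the two base features below are illustrative only, not the paper's descriptor set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 500)  # a 5-s segment sampled at 100 Hz

def make_segment(noisy):
    sig = np.sin(2 * np.pi * 1.2 * t)  # pseudo heart rhythm at ~72 bpm
    if noisy:
        sig = sig + rng.normal(scale=1.0, size=t.size)  # acquisition noise
    return sig

def features(sig):
    # Base features: amplitude range and successive-difference energy.
    base = [sig.max() - sig.min(), np.mean(np.diff(sig) ** 2)]
    # Meta-features: statistics computed over the base feature vector.
    meta = [np.mean(base), np.std(base)]
    return base + meta

X = np.array([features(make_segment(noisy)) for noisy in [0, 1] * 100])
y = np.array([0, 1] * 100)  # 0 = high quality, 1 = low quality

acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
```

On such cleanly separated synthetic data the classifier is near perfect; the paper's contribution is showing that the meta-feature layer still adds several accuracy points on real, far messier ECG recordings.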