About

José Manuel Oliveira holds a Licenciatura in Mathematics Applied to Computer Science (1992), an MSc in Telecommunications (1996), and a PhD in Engineering Sciences (2005), all from the University of Porto.

He is an Assistant Professor at the Faculty of Economics of the University of Porto, where he belongs to the scientific group of Mathematics and Information Systems. He has been a researcher at INESC TEC since 1992, carrying out his research work at the Centre for Telecommunications and Multimedia. His research interests focus mainly on wireless networks and include radio resource management, self-configuration of networks and systems, and the processing and analysis of communications data.

José has participated in several research projects, including the European projects FP6 DAIDALOS Phase 2, IST VESPER, IST OPIUM and ACTS SCREEN; the QREN projects SITMe and Portal Douro; the CMU project SELF-PVP; and the P2020 project Marecom.

Topics of interest

Details

  • Name

    José Manuel Oliveira
  • Position

    Senior Researcher
  • Since

    01 December 1992
Publications

2025

Tax Optimization in the European Union: A Laffer Curve Perspective

Authors
Sentinelo, T; Queiros, M; Oliveira, JM; Ramos, P;

Publication
ECONOMIES

Abstract
This study explores the applicability of the Laffer Curve in the context of the European Union (EU) by analyzing the relationship between taxation and fiscal revenue across personal income tax (PIT), corporate income tax (CIT), and value-added tax (VAT). Utilizing a comprehensive panel data set spanning 1995 to 2022 across all 27 EU member states, the research also integrates the Bird Index to assess fiscal effort and employs advanced econometric techniques, including the Hausman Test and log-quadratic regression models, to capture the non-linear dynamics of the Laffer Curve. The findings reveal that excessively high tax rates, particularly in some larger member states, may lead to revenue losses due to reduced economic activity and tax evasion, highlighting the existence of optimal tax rates that maximize revenue while sustaining economic growth. By estimating threshold tax rates and incorporating the Bird Index, the study provides a nuanced perspective on tax efficiency and fiscal sustainability, offering evidence-based policy recommendations for optimizing tax systems in the European Union to balance revenue generation with economic competitiveness.
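
A log-quadratic specification of the kind mentioned above can be written as follows; the variable names, fixed-effects structure and exact functional form are assumptions for illustration, not taken from the paper.

    \ln R_{it} = \beta_0 + \beta_1 \tau_{it} + \beta_2 \tau_{it}^2 + \mu_i + \varepsilon_{it},
    \qquad \tau^{*} = -\frac{\beta_1}{2 \beta_2}

Here R_{it} denotes the tax revenue of country i in year t, \tau_{it} the tax rate and \mu_i a country fixed effect; a negative \beta_2 produces the inverted-U Laffer shape, and setting the derivative with respect to \tau_{it} to zero gives the revenue-maximizing rate \tau^{*}.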

2025

Optimizing Credit Risk Prediction for Peer-to-Peer Lending Using Machine Learning

Authors
Souadda, LI; Halitim, AR; Benilles, B; Oliveira, JM; Ramos, P;

Publication

Abstract
This study investigates the effectiveness of different hyperparameter tuning strategies for peer-to-peer risk management. Ensemble learning techniques have shown superior performance in this field compared to individual classifiers and traditional statistical methods. However, model performance is influenced not only by the choice of algorithm but also by hyperparameter tuning, which impacts both predictive accuracy and computational efficiency. This research compares the performance and efficiency of three widely used hyperparameter tuning methods, Grid Search, Random Search, and Optuna, across XGBoost, LightGBM, and Logistic Regression models. The analysis uses the Lending Club dataset, spanning from 2007 Q1 to 2020 Q3, with comprehensive data preprocessing to address missing values, class imbalance, and feature engineering. Model explainability is assessed through feature importance analysis to identify key drivers of default probability. The findings reveal comparable predictive performance among the tuning methods, evaluated using metrics such as G-mean, sensitivity, and specificity. However, Optuna significantly outperforms the others in computational efficiency; for instance, it is 10.7 times faster than Grid Search for XGBoost and 40.5 times faster for LightGBM. Additionally, variations in feature importance rankings across tuning methods influence model interpretability and the prioritization of risk factors. These insights underscore the importance of selecting appropriate hyperparameter tuning strategies to optimize both performance and explainability in peer-to-peer risk management models.
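
To make the tuning comparison concrete, below is a minimal sketch of Optuna-based tuning of an XGBoost classifier on synthetic imbalanced data, optimizing the G-mean reported above; the search space, data and split are illustrative assumptions, not the study's actual setup.

    import numpy as np
    import optuna
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    # synthetic imbalanced data standing in for the preprocessed Lending Club features
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=42)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    def objective(trial):
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
            "max_depth": trial.suggest_int("max_depth", 3, 10),
            "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
            "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        }
        model = xgb.XGBClassifier(**params, eval_metric="logloss")
        model.fit(X_tr, y_tr)
        pred = model.predict(X_val)
        sensitivity = recall_score(y_val, pred, pos_label=1)  # true positive rate
        specificity = recall_score(y_val, pred, pos_label=0)  # true negative rate
        return np.sqrt(sensitivity * specificity)             # G-mean

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)
    print(study.best_params)

Grid Search and Random Search would sweep the same space with sklearn's GridSearchCV or RandomizedSearchCV, which is what makes the wall-clock comparison between the three strategies possible.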

2025

Transformer-Based Models for Probabilistic Time Series Forecasting with Explanatory Variables

Authors
Caetano, R; Oliveira, JM; Ramos, P;

Publication
MATHEMATICS

Abstract
Accurate demand forecasting is essential for retail operations as it directly impacts supply chain efficiency, inventory management, and financial performance. However, forecasting retail time series presents significant challenges due to their irregular patterns, hierarchical structures, and strong dependence on external factors such as promotions, pricing strategies, and socio-economic conditions. This study evaluates the effectiveness of Transformer-based architectures, specifically Vanilla Transformer, Informer, Autoformer, ETSformer, NSTransformer, and Reformer, for probabilistic time series forecasting in retail. A key focus is the integration of explanatory variables, such as calendar-related indicators, selling prices, and socio-economic factors, which play a crucial role in capturing demand fluctuations. This study assesses how incorporating these variables enhances forecast accuracy, addressing a research gap in the comprehensive evaluation of explanatory variables within multiple Transformer-based models. Empirical results, based on the M5 dataset, show that incorporating explanatory variables generally improves forecasting performance. Models leveraging these variables achieve up to 12.4% reduction in Normalized Root Mean Squared Error (NRMSE) and 2.9% improvement in Mean Absolute Scaled Error (MASE) compared to models that rely solely on past sales. Furthermore, probabilistic forecasting enhances decision making by quantifying uncertainty, providing more reliable demand predictions for risk management. These findings underscore the effectiveness of Transformer-based models in retail forecasting and emphasize the importance of integrating domain-specific explanatory variables to achieve more accurate, context-aware predictions in dynamic retail environments.
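
For reference, the two point-forecast metrics quoted above can be computed as in the sketch below; the normalizations used here (mean of the actuals for NRMSE, a seasonal naive scale for MASE) are common conventions and are assumptions rather than the paper's exact definitions.

    import numpy as np

    def nrmse(y_true, y_pred):
        # root mean squared error normalized by the mean of the actuals
        return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

    def mase(y_true, y_pred, y_insample, m=7):
        # scaled by the in-sample seasonal naive forecast error (period m)
        scale = np.mean(np.abs(y_insample[m:] - y_insample[:-m]))
        return np.mean(np.abs(y_true - y_pred)) / scale

    # toy example: one year of daily sales history and a 14-day-ahead forecast
    rng = np.random.default_rng(0)
    history = rng.poisson(20, size=365).astype(float)
    actuals = rng.poisson(20, size=14).astype(float)
    forecast = np.full(14, history[-28:].mean())
    print(nrmse(actuals, forecast), mase(actuals, forecast, history))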

2025

Deep Learning-Driven Integration of Multimodal Data for Material Property Predictions

Authors
Costa, V; Oliveira, JM; Ramos, P;

Publication
COMPUTATION

Abstract
Advancements in deep learning have revolutionized materials discovery by enabling predictive modeling of complex material properties. However, single-modal approaches often fail to capture the intricate interplay of compositional, structural, and morphological characteristics. This study introduces a novel multimodal deep learning framework for enhanced material property prediction, integrating textual (chemical compositions), tabular (structural descriptors), and image-based (2D crystal structure visualizations) modalities. Utilizing the Alexandria database, we construct a comprehensive multimodal dataset of 10,000 materials with symmetry-resolved crystallographic data. Specialized neural architectures, such as FT-Transformer for tabular data, a Hugging Face Electra-based model for text, and TIMM-based MetaFormer for images, generate modality-specific embeddings, fused through a hybrid strategy into a unified latent space. The framework predicts seven critical material properties, including electronic (band gap, density of states), thermodynamic (formation energy, energy above hull, total energy), magnetic (magnetic moment per volume), and volumetric (volume per atom) features, many governed by crystallographic symmetry. Experimental results demonstrate that multimodal fusion significantly outperforms unimodal baselines. Notably, the bimodal integration of image and text data showed significant gains, reducing the Mean Absolute Error for band gap by approximately 22.7% and for volume per atom by 22.4% compared to the average unimodal models. This combination also achieved a 28.4% reduction in Root Mean Squared Error for formation energy. The full trimodal model (tabular + images + text) yielded competitive, and in several cases the lowest, error metrics, particularly for band gap, magnetic moment per volume and density of states per atom, confirming the value of integrating all three modalities. This scalable, modular framework advances materials informatics, offering a powerful tool for data-driven materials discovery and design.
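
The late-fusion idea described above can be sketched in PyTorch as follows: three modality-specific encoders produce embeddings that are concatenated into a unified latent space feeding a regression head over the seven targets. The small linear encoders are stand-ins for the FT-Transformer, Electra and MetaFormer backbones, and all dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FusionRegressor(nn.Module):
        def __init__(self, tab_dim=32, txt_dim=64, img_dim=128, latent=256, n_targets=7):
            super().__init__()
            self.tab_enc = nn.Sequential(nn.Linear(tab_dim, latent), nn.ReLU())
            self.txt_enc = nn.Sequential(nn.Linear(txt_dim, latent), nn.ReLU())
            self.img_enc = nn.Sequential(nn.Linear(img_dim, latent), nn.ReLU())
            self.head = nn.Sequential(
                nn.Linear(3 * latent, latent), nn.ReLU(),
                nn.Linear(latent, n_targets),  # seven material properties
            )

        def forward(self, tab, txt, img):
            z = torch.cat([self.tab_enc(tab), self.txt_enc(txt), self.img_enc(img)], dim=-1)
            return self.head(z)

    model = FusionRegressor()
    out = model(torch.randn(8, 32), torch.randn(8, 64), torch.randn(8, 128))
    print(out.shape)  # torch.Size([8, 7])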

2024

Enhancing Hierarchical Sales Forecasting with Promotional Data: A Comparative Study Using ARIMA and Deep Neural Networks

Authors
Teixeira, M; Oliveira, JM; Ramos, P;

Publication
MACHINE LEARNING AND KNOWLEDGE EXTRACTION

Abstract
Retailers depend on accurate sales forecasts to effectively plan operations and manage supply chains. These forecasts are needed across various levels of aggregation, making hierarchical forecasting methods essential for the retail industry. As competition intensifies, the use of promotions has become a widespread strategy, significantly impacting consumer purchasing behavior. This study seeks to improve forecast accuracy by incorporating promotional data into hierarchical forecasting models. Using a sales dataset from a major Portuguese retailer, base forecasts are generated for different hierarchical levels using ARIMA models and Multi-Layer Perceptron (MLP) neural networks. Reconciliation methods including bottom-up, top-down, and optimal reconciliation with OLS and WLS (struct) estimators are employed. The results show that MLPs outperform ARIMA models for forecast horizons longer than one day. While the addition of regressors enhances ARIMA's accuracy, it does not yield similar improvements for MLP. MLPs present a compelling balance of simplicity and efficiency, outperforming ARIMA in flexibility while offering faster training times and lower computational demands compared to more complex deep learning models, making them highly suitable for practical retail forecasting applications.
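
The reconciliation step mentioned above can be illustrated with numpy on a toy two-level hierarchy (total = A + B); the bottom-up and OLS projections below follow the standard formulation and are a minimal sketch rather than the paper's exact configuration.

    import numpy as np

    S = np.array([[1, 1],   # total
                  [1, 0],   # series A
                  [0, 1]])  # series B

    base = np.array([100.0, 55.0, 52.0])  # incoherent base forecasts (55 + 52 != 100)

    # bottom-up: keep the bottom-level forecasts and aggregate them upwards
    bottom_up = S @ base[1:]

    # OLS reconciliation: project the base forecasts onto the coherent subspace
    P = np.linalg.inv(S.T @ S) @ S.T
    ols = S @ (P @ base)

    print(bottom_up)  # [107.  55.  52.]
    print(ols)        # coherent forecasts closest to the base forecasts in least squares

WLS (struct) replaces the identity weighting in this projection with weights given by the number of bottom-level series aggregated at each node.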