2025
Authors
Oliveira, B; Oliveira, Ó; Peixoto, T; Ribeiro, F; Pereira, C;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Industry 4.0 promotes a paradigm shift in the orchestration, oversight, and optimization of value chains across product and service life cycles. For instance, leveraging large-scale data from sensors and devices, coupled with Machine Learning techniques, can enhance decision-making and facilitate various improvements in industrial settings, including predictive maintenance. However, ensuring data quality remains a significant challenge. Malfunctions in sensors or external factors such as electromagnetic interference can compromise data accuracy, thereby undermining confidence in related systems. Neglecting data quality not only compromises system outputs but also contributes to the proliferation of bad data, such as duplicates, inconsistencies, or inaccuracies. Addressing these problems is crucial to fully exploring the potential of data in Industry 4.0. This paper introduces an extensible system designed to ingest, organize, and monitor data generated by various sources, focusing on industrial settings. This system can serve as a foundation for enhancing intelligent processes and optimizing operations in smart manufacturing environments. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
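The kind of ingestion-time data-quality checks the abstract alludes to can be sketched in a few lines. The rule set, thresholds, and record shape below are illustrative assumptions, not the paper's actual system:

```python
# Minimal sketch of ingestion-time data-quality rules: flag readings outside
# a plausible physical range and duplicated timestamps. The thresholds and
# record format are hypothetical, chosen only for illustration.

def validate(readings, lo=-40.0, hi=125.0):
    """Return a list of (timestamp, issue) pairs found in the readings."""
    seen, issues = set(), []
    for ts, value in readings:
        if not lo <= value <= hi:
            issues.append((ts, "out_of_range"))
        if ts in seen:
            issues.append((ts, "duplicate_timestamp"))
        seen.add(ts)
    return issues

data = [(1, 22.5), (2, 999.0), (2, 23.1), (3, 22.8)]
print(validate(data))  # → [(2, 'out_of_range'), (2, 'duplicate_timestamp')]
```

In a real pipeline such rules would typically run per sensor type, with ranges derived from the device specification rather than hard-coded.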
2025
Authors
Moreira, AC; da Costa, RA; de Sousa, MJN;
Publication
JOURNAL OF HOSPITALITY & TOURISM RESEARCH
Abstract
As storytelling influences consumer attitudes and opinions, conditioning the tourist experience by appealing to the imagination, this paper reviews the literature through an analysis of 66 papers that focus on the storytelling of the visitor/tourist as the main subject. The article is organized around four main themes: (a) storytelling as a tool to attract tourists; (b) the role of the storyteller; (c) the tourist as a storyteller; and (d) what makes a good story. The Hoshin Kanri Matrix was used to showcase each of the main themes. Although storytelling has been widely used to attract tourists, it is crucial that tourist-based storytelling be a credible substitute for destination-based storytelling, as the empathy, authenticity, and emotional attachment of tourists as storytellers play an important role in good stories, transforming and co-creating the experiences that emerge from the interaction of tourists, residents, and intermediaries.
2025
Authors
Sentinelo, T; Queiros, M; Oliveira, JM; Ramos, P;
Publication
ECONOMIES
Abstract
This study explores the applicability of the Laffer Curve in the context of the European Union (EU) by analyzing the relationship between taxation and fiscal revenue across personal income tax (PIT), corporate income tax (CIT), and value-added tax (VAT). Utilizing a comprehensive panel data set spanning 1995 to 2022 across all 27 EU member states, the research also integrates the Bird Index to assess fiscal effort and employs advanced econometric techniques, including the Hausman Test and log-quadratic regression models, to capture the non-linear dynamics of the Laffer Curve. The findings reveal that excessively high tax rates, particularly in some larger member states, may lead to revenue losses due to reduced economic activity and tax evasion, highlighting the existence of optimal tax rates that maximize revenue while sustaining economic growth. By estimating threshold tax rates and incorporating the Bird Index, the study provides a nuanced perspective on tax efficiency and fiscal sustainability, offering evidence-based policy recommendations for optimizing tax systems in the European Union to balance revenue generation with economic competitiveness.
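The Laffer-curve logic behind the study's log-quadratic models can be illustrated with a minimal sketch: revenue is assumed to rise and then fall with the tax rate, so the revenue-maximizing rate sits at the vertex of a fitted parabola. The revenue function and all numbers below are invented for illustration, not estimates from the paper:

```python
# Illustrative sketch only: a stylized Laffer curve and standard parabolic
# interpolation to locate its peak. The peak rate (0.35) is an assumption
# made up for this example, not a finding of the study.

def peak_rate(t1, h, r0, r1, r2):
    """Vertex of the parabola through three equally spaced points
    (t1 - h, r0), (t1, r1), (t1 + h, r2)."""
    return t1 + h * (r0 - r2) / (2 * (r0 - 2 * r1 + r2))

def revenue(t, peak=0.35, top=0.15):
    # Stylized Laffer curve: an inverted parabola peaking at `peak`.
    return top - (t - peak) ** 2

t1, h = 0.30, 0.10
estimate = peak_rate(t1, h, revenue(t1 - h), revenue(t1), revenue(t1 + h))
print(round(estimate, 2))  # → 0.35
```

The paper's panel regressions estimate such thresholds from observed tax-revenue data across member states rather than from a known functional form.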
2025
Authors
Souadda, LI; Halitim, AR; Benilles, B; Oliveira, JM; Ramos, P;
Publication
FORECASTING
Abstract
Hyperparameter optimization (HPO) is critical for enhancing the predictive performance of machine learning models in credit risk assessment for peer-to-peer (P2P) lending. This study evaluates four HPO methods, Grid Search, Random Search, Hyperopt, and Optuna, across four models, Logistic Regression, Random Forest, XGBoost, and LightGBM, using three real-world datasets (Lending Club, Australia, Taiwan). We assess predictive accuracy (AUC, Sensitivity, Specificity, G-Mean), computational efficiency, robustness, and interpretability. LightGBM achieves the highest AUC (e.g., 70.77% on Lending Club, 93.25% on Australia, 77.85% on Taiwan), with XGBoost performing comparably. Bayesian methods (Hyperopt, Optuna) match or approach Grid Search's accuracy while reducing runtime by up to 75.7-fold (e.g., 3.19 vs. 241.47 min for LightGBM on Lending Club). A sensitivity analysis confirms robust hyperparameter configurations, with AUC variations typically below 0.4% under +/- 10% perturbations. A feature importance analysis, using gain and SHAP metrics, identifies debt-to-income ratio and employment title as key default predictors, with stable rankings (Spearman correlation > 0.95, p<0.01) across tuning methods, enhancing model interpretability. Operational impact depends on data quality, scalable infrastructure, fairness audits for features like employment title, and stakeholder collaboration to ensure compliance with regulations like the EU AI Act and U.S. Equal Credit Opportunity Act. These findings advocate Bayesian HPO and ensemble models in P2P lending, offering scalable, transparent, and fair solutions for default prediction, with future research suggested to explore advanced resampling, cost-sensitive metrics, and feature interactions.
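Of the four HPO methods compared, random search is the simplest to sketch: sample hyperparameters uniformly from the search space and keep the best-scoring configuration. The toy objective, search space, and evaluation budget below are illustrative assumptions, not the study's setup:

```python
import random

# Illustrative random-search HPO sketch. The objective is a stand-in for a
# cross-validated AUC; the hyperparameter names mirror common LightGBM
# parameters but the ranges and optimum are invented for this example.

def toy_auc(learning_rate, num_leaves):
    # Hypothetical score surface peaking near lr = 0.1, num_leaves = 31.
    return 1.0 - (learning_rate - 0.1) ** 2 - ((num_leaves - 31) / 100) ** 2

random.seed(0)  # fixed seed so the run is reproducible
best = max(
    ({"learning_rate": random.uniform(0.01, 0.3),
      "num_leaves": random.randint(15, 127)}
     for _ in range(50)),
    key=lambda p: toy_auc(**p),
)
print(15 <= best["num_leaves"] <= 127)  # → True
```

Bayesian methods such as Hyperopt and Optuna replace the uniform sampling with a surrogate model that concentrates trials in promising regions, which is where the runtime savings reported above come from.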
2025
Authors
Caetano, R; Oliveira, JM; Ramos, P;
Publication
MATHEMATICS
Abstract
Accurate demand forecasting is essential for retail operations as it directly impacts supply chain efficiency, inventory management, and financial performance. However, forecasting retail time series presents significant challenges due to their irregular patterns, hierarchical structures, and strong dependence on external factors such as promotions, pricing strategies, and socio-economic conditions. This study evaluates the effectiveness of Transformer-based architectures, specifically Vanilla Transformer, Informer, Autoformer, ETSformer, NSTransformer, and Reformer, for probabilistic time series forecasting in retail. A key focus is the integration of explanatory variables, such as calendar-related indicators, selling prices, and socio-economic factors, which play a crucial role in capturing demand fluctuations. This study assesses how incorporating these variables enhances forecast accuracy, addressing a research gap in the comprehensive evaluation of explanatory variables within multiple Transformer-based models. Empirical results, based on the M5 dataset, show that incorporating explanatory variables generally improves forecasting performance. Models leveraging these variables achieve up to a 12.4% reduction in Normalized Root Mean Squared Error (NRMSE) and a 2.9% improvement in Mean Absolute Scaled Error (MASE) compared to models that rely solely on past sales. Furthermore, probabilistic forecasting enhances decision-making by quantifying uncertainty, providing more reliable demand predictions for risk management. These findings underscore the effectiveness of Transformer-based models in retail forecasting and emphasize the importance of integrating domain-specific explanatory variables to achieve more accurate, context-aware predictions in dynamic retail environments.
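One of the accuracy metrics reported above, MASE, scales the forecast's mean absolute error by the in-sample MAE of a naive one-step-ahead forecast. All series in this sketch are invented numbers, used only to show the computation:

```python
# Illustrative MASE computation: forecast MAE divided by the in-sample MAE
# of the naive forecast (each value predicted by its predecessor).
# The series below are made up for this example.

def mase(actuals, forecasts, train):
    mae = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)
    naive_mae = sum(abs(train[i] - train[i - 1])
                    for i in range(1, len(train))) / (len(train) - 1)
    return mae / naive_mae

train = [10, 12, 11, 13, 12, 14]   # in-sample history
actuals = [13, 15]                 # held-out demand
forecasts = [13.5, 14.0]           # model forecasts
print(round(mase(actuals, forecasts, train), 3))  # → 0.469
```

Values below 1 indicate the model beats the naive benchmark on average, which makes MASE well suited to comparing forecasts across series of different scales.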
2025
Authors
Costa, V; Oliveira, JM; Ramos, P;
Publication
COMPUTATION
Abstract
Advancements in deep learning have revolutionized materials discovery by enabling predictive modeling of complex material properties. However, single-modal approaches often fail to capture the intricate interplay of compositional, structural, and morphological characteristics. This study introduces a novel multimodal deep learning framework for enhanced material property prediction, integrating textual (chemical compositions), tabular (structural descriptors), and image-based (2D crystal structure visualizations) modalities. Utilizing the Alexandria database, we construct a comprehensive multimodal dataset of 10,000 materials with symmetry-resolved crystallographic data. Specialized neural architectures, such as an FT-Transformer for tabular data, a Hugging Face Electra-based model for text, and a TIMM-based MetaFormer for images, generate modality-specific embeddings, fused through a hybrid strategy into a unified latent space. The framework predicts seven critical material properties, including electronic (band gap, density of states), thermodynamic (formation energy, energy above hull, total energy), magnetic (magnetic moment per volume), and volumetric (volume per atom) features, many governed by crystallographic symmetry. Experimental results demonstrated that multimodal fusion significantly outperforms unimodal baselines. Notably, the bimodal integration of image and text data showed significant gains, reducing the Mean Absolute Error for band gap by approximately 22.7% and for volume per atom by 22.4% compared to the average unimodal models. This combination also achieved a 28.4% reduction in Root Mean Squared Error for formation energy. The full trimodal model (tabular + images + text) yielded competitive, and in several cases the lowest, error metrics, particularly for band gap, magnetic moment per volume, and density of states per atom, confirming the value of integrating all three modalities.
This scalable, modular framework advances materials informatics, offering a powerful tool for data-driven materials discovery and design.
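At its simplest, the fusion step described above can be sketched as concatenation of per-modality embeddings followed by a shared prediction head. The encoders, embedding dimensions, and weights below are illustrative placeholders, not the paper's architecture:

```python
# Illustrative concatenation fusion: each modality encoder is assumed to
# produce a fixed-size embedding; the fused representation is their
# concatenation, passed to a shared linear head. All numbers are invented.

def fuse(text_emb, tab_emb, img_emb):
    # Late fusion by concatenation into one unified vector.
    return text_emb + tab_emb + img_emb  # list concatenation

def linear_head(z, weights, bias=0.0):
    # Toy stand-in for the shared regression head over the fused space.
    return bias + sum(w * x for w, x in zip(weights, z))

z = fuse([0.1, 0.2], [0.3], [0.4, 0.5])
print(len(z))                                   # → 5
print(round(linear_head(z, [1, 1, 1, 1, 1]), 2))  # → 1.5
```

The paper's hybrid strategy is richer than plain concatenation, but the principle is the same: modality-specific embeddings meet in one latent space before prediction.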