2025
Authors
Gallan, S; Alkire, L; Teixeira, JG; Heinonen, K; Fisk, P;
Publication
AMS Review
Abstract
Amidst an urgent need for sustainability, novel approaches are required to address environmental challenges. In this context, biomimicry offers a promising logic for catalyzing nature’s wisdom to address this complexity. The purpose of this research is to (1) establish a biomimetic understanding and vocabulary for sustainability and (2) apply biomimicry to upframe service ecosystems as a foundation for sustainability. Our research question is: How can the principles of natural ecosystems inform and enhance the sustainability of service ecosystems? The findings highlight upframed service ecosystems as embodying a set of practices that (1) promote mutualistic interactions, (2) build on local biotic and abiotic components supporting emergence processes, (3) leverage (bio)diversity to build resilience, (4) foster resource sharing for regeneration, and (5) bridge individual roles to optimize the community rather than individual well-being. Our upframed definition of a service ecosystem is a system of resource-integrating biotic actors and abiotic resources functioning according to ecocentric principles for mutualistic and regenerative value creation. The discussion emphasizes the implications of this upframed definition for sustainability practices, advocating for a shift in understanding and interacting with service ecosystems. It emphasizes the potential for immediate mutualistic benefits and long-term regenerative impacts. © Academy of Marketing Science 2025.
2025
Authors
Lopes, FL; Mangussi, AD; Pereira, RC; Santos, MS; Abreu, PH; Lorena, AC;
Publication
IEEE Access
Abstract
Missing data is a common challenge in real-world datasets and can arise for various reasons. This has led to the classification of missing data mechanisms as missing completely at random, missing at random, or missing not at random. Currently, the literature offers various algorithms for imputing missing data, each with advantages tailored to specific mechanisms and levels of missingness. This paper introduces a novel approach to missing data imputation using the well-established label propagation algorithm, named Label Propagation for Missing Data Imputation (LPMD). The method combines, weighs, and propagates known feature values to impute missing data. Experiments on benchmark datasets highlight its effectiveness across various missing data scenarios, demonstrating more stable results than baseline methods under different missingness mechanisms and levels. The algorithms were evaluated on processing time, imputation quality (measured by mean absolute error), and impact on classification performance. A variant of the algorithm (LPMD2) generally achieved the fastest processing time when compared with five other imputation algorithms from the literature, with speed-ups ranging from 0.7 to 23 times. LPMD was also stable in terms of the mean absolute error between imputed values and their original counterparts, across different missing data mechanisms and rates of missing values. In real applications, missingness can follow different and unknown mechanisms, so an imputation algorithm that behaves stably across mechanisms is advantageous. Machine learning models trained on the imputed datasets also performed comparably to the baselines. © 2013 IEEE.
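A minimal sketch of label-propagation-style imputation in the spirit of the LPMD idea described above: observed values are spread to missing entries over a sample-similarity graph. The RBF graph construction and update rule are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def propagate_impute(X, n_iter=50, sigma=1.0):
    """Impute NaNs in X by propagating observed values over a similarity graph."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)                                   # True where values are missing
    X_filled = np.where(mask, np.nanmean(X, axis=0), X)  # column-mean initialisation
    for _ in range(n_iter):
        # Pairwise RBF similarities between samples on the current estimates
        d2 = ((X_filled[:, None, :] - X_filled[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        W /= W.sum(axis=1, keepdims=True)   # row-normalise the weights
        X_new = W @ X_filled                # weighted average of neighbours
        X_filled[mask] = X_new[mask]        # only the missing entries are updated
    return X_filled

X = np.array([[1.0, 2.0], [np.nan, 2.1], [0.9, np.nan], [5.0, 6.0]])
print(propagate_impute(X))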
2025
Authors
Rodrigues, EM; Baghoussi, Y; Mendes Moreira, J;
Publication
EXPERT SYSTEMS
Abstract
Deep learning models are widely used in multivariate time series forecasting, yet they have high computational costs. One way to reduce this cost is to reduce data dimensionality, removing unimportant or low-importance information with an appropriate method. This work presents a study of an explainability-based feature selection framework composed of four methods (IMV-LSTM Tensor, LIME-LSTM, Average SHAP-LSTM, and Instance SHAP-LSTM), aimed at turning the complexity of the LSTM black-box model to its advantage, with the end goal of improving error metrics and reducing the computational cost of a forecasting task. To test the framework, three datasets with a total of 101 multivariate time series were used, with the explainability methods outperforming the baseline methods on most of the data, whether in error metrics or in computation time for LSTM model training.
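A hedged sketch of explainability-driven feature selection in the spirit of the framework above: score each input feature with SHAP, average the absolute attributions, keep the top-k features, and retrain on the reduced set. A RandomForestRegressor stands in for the LSTM to keep the example light and dependency-free; the selection logic is the part being illustrated, not the paper's exact pipeline.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 8 candidate input series
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)       # shape: (n_samples, n_features)

importance = np.abs(shap_values).mean(axis=0)    # "Average SHAP" style score
top_k = np.argsort(importance)[::-1][:3]         # keep the 3 strongest inputs
print("selected features:", sorted(top_k.tolist()))

reduced = RandomForestRegressor(n_estimators=50, random_state=0)
reduced.fit(X[:, top_k], y)                  # cheaper model on fewer inputs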
2025
Authors
Nogueira, AFR; Oliveira, HP; Teixeira, LF;
Publication
IMAGE AND VISION COMPUTING
Abstract
3D human pose estimation aims to reconstruct the human skeleton of all the individuals in a scene by detecting several body joints. Accurate and efficient methods are required for several real-world applications, including animation, human-robot interaction, surveillance systems, and sports, among many others. However, several obstacles, such as occlusions, random camera perspectives, and the scarcity of 3D labelled data, have hampered the models' performance and limited their deployment in real-world scenarios. The greater availability of cameras has led researchers to explore multi-view solutions, which can exploit different perspectives to reconstruct the pose. Most existing reviews focus mainly on monocular 3D human pose estimation, and a comprehensive survey dedicated to multi-view approaches has been missing since 2012. The goal of this survey is to fill that gap: to present an overview of methodologies for 3D pose estimation in multi-view settings, to understand the strategies found to address the various challenges, and to identify their limitations. According to the reviewed articles, most methods are fully-supervised approaches based on geometric constraints. Nonetheless, most methods suffer from 2D pose mismatches; the incorporation of temporal consistency and depth information has been suggested to reduce the impact of this limitation, while working directly with 3D features can avoid the problem entirely, at the expense of higher computational complexity. Models with lower supervision levels were identified as a way to overcome some of the issues related to 3D pose, particularly the scarcity of labelled datasets. Therefore, no method is yet capable of solving all the challenges associated with reconstructing the 3D pose. Given the trade-off between complexity and performance, the best method depends on the application scenario, and further research is still required to develop an approach capable of quickly inferring a highly accurate 3D pose at a bearable computational cost. To this end, techniques such as active learning, methods that learn with a low level of supervision, the incorporation of temporal consistency, view selection, estimation of depth information, and multi-modal approaches are promising strategies to keep in mind when developing a new methodology for this task.
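A minimal sketch of the geometric constraint most multi-view methods build on: triangulating a 3D joint from its 2D detections in two calibrated views via the Direct Linear Transform (DLT). The camera matrices and point here are toy values for illustration only.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two 2D projections and 3x4 camera matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)              # least-squares homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenise

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated camera
X_true = np.array([0.2, -0.1, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]    # noiseless 2D detections
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))           # ~ [0.2, -0.1, 4.0]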
2025
Authors
Rincon, AM; Rizzo Vincenzi, AM; Faria, JP;
Publication
IEEE International Conference on Software Testing, Verification and Validation, ICST 2025 - Workshops, Naples, Italy, March 31 - April 4, 2025
Abstract
This study explores prompt engineering for automated white-box integration testing of RESTful APIs using Large Language Models (LLMs). Four versions of prompts were designed and tested across three OpenAI models (GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o) to assess their impact on code coverage, token consumption, execution time, and financial cost. The results indicate that different prompt versions, especially with more advanced models, achieved up to 90% coverage, although at higher costs. Additionally, combining test sets from different models increased coverage, reaching 96% in some cases. We also compared the results with EvoMaster, a specialized tool for generating tests for REST APIs, where LLM-generated tests achieved comparable or higher coverage in the benchmark projects. Despite higher execution costs, LLMs demonstrated superior adaptability and flexibility in test generation. © 2025 IEEE.
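A hedged sketch of the kind of prompt-driven test generation evaluated above, using the OpenAI chat completions API. The prompt wording and the white-box framing are hypothetical stand-ins; the study's actual four prompt versions are not reproduced here.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are generating white-box integration tests for a RESTful API.
Source code of the endpoint under test:
{source_code}
Write pytest tests that exercise every branch, including error responses,
and assert on status codes and response bodies."""

def generate_tests(source_code: str, model: str = "gpt-4o") -> str:
    """Ask the model for a test suite covering the given endpoint source."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(source_code=source_code)}],
    )
    return response.choices[0].message.content  # test code to save and execute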
2025
Authors
Mazarei, A; Sousa, R; Mendes Moreira, J; Molchanov, S; Ferreira, HM;
Publication
INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS
Abstract
Outlier detection is a widely used technique for identifying anomalous or exceptional events across various contexts. It has proven valuable in applications such as fault detection, fraud detection, and real-time monitoring systems. Detecting outliers in real time is crucial in several industries, such as financial fraud detection and quality control in manufacturing processes. In the context of big data, the amount of data generated is enormous, and traditional batch-mode methods are not practical since the entire dataset is not available; limited computational resources further compound this issue. The boxplot is a widely used batch-mode algorithm for outlier detection that involves deriving several statistics. However, the lack of an incremental closed form for these statistical calculations during boxplot construction poses considerable challenges for its application in the realm of big data. We propose an incremental/online version of the boxplot algorithm to address these challenges. Our proposed algorithm is based on an approximation approach that involves numerical integration of the histogram and calculation of the cumulative distribution function. This approach is independent of the dataset's distribution, making it effective for all types of distributions, whether skewed or not. To assess the efficacy of the proposed algorithm, we conducted tests using simulated datasets featuring varying degrees of skewness. Additionally, we applied the algorithm to a real-world dataset concerning software fault detection, which posed a considerable challenge. The experimental results underscored the robust performance of our proposed algorithm, highlighting efficacy comparable to batch-mode methods that access the entire dataset. Our online boxplot method, which leverages the dataset's distribution to define the whiskers, consistently achieved excellent outlier detection results. Notably, the algorithm demonstrated computational efficiency, maintaining constant memory usage with minimal hyperparameter tuning.
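A minimal sketch of the histogram-based idea described above: maintain a fixed-bin streaming histogram, read the quartiles off its cumulative distribution, and flag points beyond the classic 1.5*IQR whiskers. The bin range, bin count, and warm-up threshold are assumptions for illustration; the paper's exact scheme may differ.

import numpy as np

class OnlineBoxplot:
    def __init__(self, lo, hi, bins=256):
        self.edges = np.linspace(lo, hi, bins + 1)   # fixed bins: constant memory
        self.counts = np.zeros(bins)

    def _quantile(self, q):
        cdf = np.cumsum(self.counts) / self.counts.sum()
        i = np.searchsorted(cdf, q)                  # first bin reaching mass q
        return self.edges[min(i + 1, len(self.edges) - 1)]

    def update(self, x):
        """Return True if x falls outside the current whiskers, then absorb it."""
        outlier = False
        if self.counts.sum() > 30:                   # warm-up before flagging
            q1, q3 = self._quantile(0.25), self._quantile(0.75)
            iqr = q3 - q1
            outlier = x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr
        b = np.clip(np.searchsorted(self.edges, x) - 1, 0, len(self.counts) - 1)
        self.counts[b] += 1                          # incremental histogram update
        return outlier

rng = np.random.default_rng(1)
detector = OnlineBoxplot(lo=-10, hi=10)
stream = list(rng.normal(size=500)) + [9.5]          # one injected anomaly
flags = [detector.update(v) for v in stream]
print("flagged:", int(np.sum(flags)), "- injected anomaly caught:", flags[-1])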