2024
Authors
Vazquez Noguerol, M; Comesaña Benavides, JA; Prado Prado, JC; Amorim, P;
Publication
EUROPEAN JOURNAL OF INNOVATION MANAGEMENT
Abstract
Purpose: Disruptions are appearing more frequently and having an ever greater impact on supply chains (SC), affecting the vulnerability and sustainability of organisations. Our study proposes an innovative approach to address contemporary challenges by introducing coopetition as a strategic capability. The aim of this study is to enable companies to adapt and thrive by applying a tool that measures and monitors different logistical scenarios to improve performance and antifragility.
Design/methodology/approach: With the aim of jointly planning the transport activities of two competing companies, we present a linear programming model that promotes synergies which enhance resource utilisation. To demonstrate the validity of the model, a case study is conducted to measure, monitor and evaluate the results obtained after collaborating on SC activities.
Findings: Current tools to support logistics planning are not effective because they hamper information exchange, cost allocation and performance measurement. Our innovative model optimises collaborative networks (CNs) and monitors economic, environmental and social improvements. The case study shows a reduction in logistics costs (13%) and carbon footprint (37%), and an improvement in social antifragility as agility and flexibility emerge.
Originality/value: CNs have become an effective means of enhancing resilience, but there are no empirical contributions demonstrating how to achieve this. We provide a real case with computational experiments that offer empirical evidence of the effectiveness of the model, which measures, optimises and evaluates SC performance in coopetitive environments. This approach is a guide for researchers and practitioners when creating simulations to reduce risks and facilitate decision-making.
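The joint transport-planning idea described above can be illustrated with a minimal linear program: two competing carriers pool capacity to serve shared delivery zones at minimum combined cost. This is a hedged sketch using SciPy's `linprog`; the carriers, zones, costs and capacities below are invented for illustration and are not the paper's actual model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy joint transport plan for two competing carriers (A and B) serving
# three delivery zones. All costs and capacities are illustrative only.
cost = np.array([4.0, 6.0, 9.0,   # carrier A: cost per tonne to zones 1-3
                 5.0, 5.0, 7.0])  # carrier B: cost per tonne to zones 1-3
demand = np.array([30.0, 20.0, 25.0])  # tonnes required per zone

# Each zone's demand must be met by the combined fleet.
A_eq = np.zeros((3, 6))
for z in range(3):
    A_eq[z, z] = 1.0      # carrier A's share for zone z
    A_eq[z, 3 + z] = 1.0  # carrier B's share for zone z

# Carrier capacity: A can haul at most 50 t in total, B at most 40 t.
A_ub = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)
b_ub = np.array([50.0, 40.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 6)
print(res.status, round(res.fun, 1))  # 0 means an optimum was found
```

Comparing `res.fun` against each carrier's standalone optimum is one way to quantify the cost saving that the coopetitive plan delivers.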
2024
Authors
Neto, PC; Mamede, RM; Albuquerque, C; Gonçalves, T; Sequeira, AF;
Publication
2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024
Abstract
Face recognition applications have grown in parallel with the size of datasets, the complexity of deep learning models and computational power. However, while deep learning models evolve to become more capable and computational power keeps increasing, the datasets available are being retracted and removed from public access. Privacy and ethical concerns are relevant topics within these domains. Through generative artificial intelligence, researchers have put effort into the development of completely synthetic datasets that can be used to train face recognition systems. Nonetheless, recent advances have not been sufficient to achieve performance comparable to state-of-the-art models trained on real data. To study the drift between the performance of models trained on real and synthetic datasets, we leverage a massive attribute classifier (MAC) to create annotations for four datasets: two real and two synthetic. From these annotations, we conduct studies on the distribution of each attribute within all four datasets. Additionally, we further inspect the differences between real and synthetic datasets on the attribute set. When comparing through the Kullback-Leibler divergence we have found differences between real and synthetic samples. Interestingly, we verified that while real samples suffice to explain the synthetic distribution, the reverse does not hold.
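The asymmetry the abstract reports falls out of the definition of the Kullback-Leibler divergence, which is not symmetric in its arguments. A minimal sketch with SciPy; the attribute and its frequencies are hypothetical, not taken from the paper's datasets.

```python
import numpy as np
from scipy.stats import entropy

# Hypothetical frequencies of one facial attribute (e.g. glasses:
# yes / no / unsure) in a real and a synthetic dataset; values are
# illustrative only.
real      = np.array([0.60, 0.35, 0.05])
synthetic = np.array([0.50, 0.45, 0.05])

# entropy(p, q) computes KL(p || q): how poorly q explains p.
kl_real_vs_syn = entropy(real, synthetic)
kl_syn_vs_real = entropy(synthetic, real)
print(round(kl_real_vs_syn, 4), round(kl_syn_vs_real, 4))
```

Because KL(p‖q) ≠ KL(q‖p) in general, comparing both directions per attribute is what lets the kind of one-sided "explanation" reported in the abstract be detected.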
2024
Authors
García-Méndez, S; Leal, F; de Arriba-Pérez, F; Malheiro, B; Burguillo-Rial, JC;
Publication
INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 1, WORLDCIST 2023
Abstract
Web 2.0 platforms, like wikis and social networks, rely on crowdsourced data and, as such, are prone to data manipulation by ill-intended contributors. This research proposes the transparent identification of wiki manipulators through the classification of contributors as benevolent or malevolent humans or bots, together with the explanation of the attributed class labels. The system comprises: (i) stream-based data pre-processing; (ii) incremental profiling; and (iii) online classification, evaluation and explanation. In particular, the system profiles contributors and contributions by combining directly collected features with content- and side-based engineered features. The experimental results obtained with a real data set collected from Wikivoyage - a popular travel wiki - attained a 98.52% classification accuracy and a 91.34% macro F-measure. Ultimately, this work addresses data reliability to prevent harmful information and manipulation.
2024
Authors
Monteiro, V; Moreira, C; Lopes, JAP; Antunes, CH; Osório, GJ; Catalão, JPS; Afonso, JL;
Publication
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS
Abstract
The decarbonization of the economy and the increasing integration of renewable energy sources into the generation mix are bringing new challenges, requiring novel technological solutions in the field of smart grids, which include smart transformers and energy storage systems. Additionally, power quality is a vital concern for future smart grids; therefore, the continuous development of power electronics solutions to overcome power quality problems is of the utmost importance. In this context, this article proposes a novel three-phase multiobjective unified power quality conditioner (MO-UPQC), considering interfaces for solar PV panels and for energy storage in batteries. The MO-UPQC is capable of compensating power quality problems in the voltages (at the load side) and in the currents (at the power grid side), while it enables injecting power into the grid (from the PV panels or batteries) or charging the batteries (from the PV panels or from the grid). Experimental results were obtained with a three-phase four-wire laboratory prototype, demonstrating the feasibility and the wide range of applications of the proposed MO-UPQC.
2024
Authors
Mendes, C; Pereira, R; Frazao, L; Ribeiro, JC; Rodrigues, N; Costa, N; Barroso, J; Pereira, AMJ;
Publication
PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT AND TECHNOLOGIES FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION, DSAI 2024
Abstract
This paper proposes an Artificial Intelligence (AI) driven solution, Chatto, designed for emotional support among older adults. It integrates emotion recognition, Natural Language Processing (NLP), and human-computer interaction (HCI) to facilitate meaningful interactions and aid in self-emotion regulation, while providing caregivers with tools to monitor and support the elder's emotional state remotely. The proposal includes an infrastructure to personalize the system through a human labeling approach and retraining of the deep learning models. The findings revealed the solution's impact on the emotional well-being of the elderly and identified potential improvements in emotion detection, conversational features, and the user interface. These improvements were based on feedback from feasibility and usability tests conducted with caregivers and older adults, taking into account demographic variables such as age, cultural background, and technological literacy.
2024
Authors
Peixoto, E; Carneiro, D; Torres, D; Silva, B; Novais, P;
Publication
Ambient Intelligence - Software and Applications - 15th International Symposium on Ambient Intelligence, ISAmI 2024, Salamanca, Spain, 26-28 June 2024.
Abstract
Many of today’s domains of application of Machine Learning (ML) are dynamic in the sense that data and their patterns change over time. This has a significant impact on the ML lifecycle and operations, requiring frequent model (re-)training or other strategies to deal with outdated models and data. This need for dynamic and responsive solutions also has an impact on the use of computational resources and, consequently, on sustainability indicators. This paper proposes an approach in line with the concept of Frugal AI, whose main aim is to minimize the resources and time spent on training models by re-using models from a pool of past models, when appropriate. Specifically, we present and validate a methodology for similarity-based model selection in data streaming environments with concept drift. Rather than training a new model for each new block of data, this methodology considers a pool with only a subset of the models and, for each new block of data, selects the best model from the pool. The best model is determined based on the distance between its training data and the current block of data. Distance is calculated based on a set of meta-features that characterizes the data, and on the Bray-Curtis distance. We show that it is possible to reuse previous models using this methodology, leading to potentially significant savings of resources and time, while maintaining predictive quality.
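The selection step the abstract describes reduces to a nearest-neighbour lookup: characterize each block by a vector of meta-features and pick the pooled model whose training block minimizes the Bray-Curtis distance to the current block. A minimal sketch with SciPy; the specific meta-features, model names and data regimes below are invented for illustration and are not the paper's exact setup.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

def meta_features(block):
    """Toy meta-feature vector for a data block (mean, spread, mean
    magnitude); stand-ins for whatever set characterizes the stream."""
    return np.array([block.mean(), block.std(), np.abs(block).mean()])

rng = np.random.default_rng(1)

# Pool: model id -> meta-features of the block each model was trained on.
pool = {
    "m_drifted":  meta_features(rng.normal(5.0, 2.0, 1000)),
    "m_baseline": meta_features(rng.normal(0.0, 1.0, 1000)),
}

# A new block arrives that resembles the baseline regime.
new_block = rng.normal(0.1, 1.0, 1000)
target = meta_features(new_block)

# Reuse the pooled model whose training block is closest in Bray-Curtis
# distance, instead of training a fresh model for this block.
best = min(pool, key=lambda k: braycurtis(pool[k], target))
print(best)
```

In a full pipeline, a distance above some threshold would signal that no pooled model fits the current concept, triggering a genuine retrain; below it, the lookup replaces training entirely, which is where the resource saving comes from.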