2025
Authors
Penelas, G; Barbosa, L; Reis, A; Barroso, J; Pinto, T;
Publication
ALGORITHMS
Abstract
In the field of gaming artificial intelligence, selecting the appropriate machine learning approach is essential for improving decision-making and automation. This paper examines the effectiveness of deep reinforcement learning (DRL) within interactive gaming environments, focusing on complex decision-making tasks. Utilizing the Unity engine, we conducted experiments to evaluate DRL methodologies in simulating realistic and adaptive agent behavior. We implemented a vehicle driving game in which the goal is to reach a given target within a small number of steps while respecting road boundaries. Our study compares Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) in terms of learning efficiency, decision-making accuracy, and adaptability. The results demonstrate that PPO successfully learns to reach the target, achieving higher and more stable cumulative rewards. Conversely, SAC struggles to reach the target, displaying significant variability and lower performance. These findings highlight the effectiveness of PPO in this context and indicate the need for further development, adaptation, and tuning of SAC. This research contributes innovative approaches to how ML can improve the way player agents adapt and react to their environments, thereby enhancing realism and dynamics in gaming experiences. Additionally, this work emphasizes the utility of games for evolving such models, preparing them for real-world applications, namely autonomous vehicle driving and optimal route calculation.
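As a concrete illustration of the first algorithm compared above, the following minimal NumPy sketch computes PPO's clipped surrogate objective, the mechanism generally credited for the stable cumulative-reward curves the abstract reports. This is an illustrative sketch, not the paper's Unity implementation; the function name and the toy batch are assumptions.

```python
import numpy as np

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    The probability ratio is clipped to [1 - eps, 1 + eps] so that a single
    update cannot move the policy too far from the one that collected the
    data -- the property often credited for PPO's training stability.
    """
    ratio = np.exp(log_probs_new - log_probs_old)        # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the pessimistic (elementwise minimum) bound;
    # returned here negated, as a loss to minimize.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy batch: positive advantages favor actions that became more likely.
rng = np.random.default_rng(0)
adv = rng.normal(size=64)
lp_old = rng.normal(-1.0, 0.1, size=64)
lp_new = lp_old + rng.normal(0.0, 0.05, size=64)
print(f"clipped surrogate loss: {ppo_clipped_loss(lp_new, lp_old, adv):.4f}")
```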
2025
Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;
Publication
COMPUTER VISION AND IMAGE UNDERSTANDING
Abstract
The increase in complexity of deep learning models demands explanations that can be obtained with methods like Grad-CAM. This method computes an importance map for the last convolutional layer relative to a specific class, which is then upsampled to match the size of the input. However, this final step assumes that there is a spatial correspondence between the last feature map and the input, which may not be the case. We hypothesize that, for models with large receptive fields, the feature spatial organization is not kept during the forward pass, which may render the explanations devoid of meaning. To test this hypothesis, common architectures were applied to a medical scenario on the public VinDr-CXR dataset, to a subset of ImageNet, and to datasets derived from MNIST. The results show a significant dispersion of the spatial information, which goes against the assumption of Grad-CAM, and that explainability maps are affected by this dispersion. Furthermore, we discuss several other caveats regarding Grad-CAM, such as feature map rectification, empty maps, and the impact of global average pooling or flatten layers. Altogether, this work addresses some key limitations of Grad-CAM which may go unnoticed by common users, taking one step further in the pursuit of more reliable explainability methods.
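The assumption the paper tests can be made concrete with a minimal Grad-CAM sketch. The version below uses an untrained torchvision resnet18 and a random input purely to show where the contested upsampling step enters; it is not the paper's code or datasets.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained, for illustration only

activations, gradients = {}, {}
def fwd_hook(module, inputs, output):
    activations["feat"] = output.detach()
def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional stage of resnet18.
h1 = model.layer4.register_forward_hook(fwd_hook)
h2 = model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)
scores = model(x)
scores[0, scores.argmax()].backward()    # gradient of the top class score

# Channel weights: global average pooling of the gradients.
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)               # (1, C, 1, 1)
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))   # (1, 1, 7, 7)

# The contested step: bilinear upsampling from 7x7 back to 224x224 assumes
# the last feature map is still spatially aligned with the input.
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)   # torch.Size([1, 1, 224, 224])

h1.remove(); h2.remove()
```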
2025
Authors
Russo, N; Mamede, HS; Reis, L;
Publication
TECHNOLOGIES
Abstract
Business Continuity Management (BCM) is critical for organizations to mitigate disruptions and maintain operations, yet many struggle with fragmented and non-standardized self-assessment tools. Existing frameworks often lack holistic integration, focusing narrowly on isolated components like cyber resilience or risk management, which limits their ability to evaluate BCM maturity comprehensively. This research addresses this gap by proposing a structured Self-Assessment System designed to unify BCM components into an adaptable, standards-aligned methodology. Grounded in Design Science Research, the system integrates a BCM Model comprising eight components and 118 activities, each evaluated through weighted questions to quantify organizational preparedness. The methodology enables organizations to conduct rapid as-is assessments using a 0-100 scoring mechanism with visual indicators (red/yellow/green), benchmark progress over time and against peers, and align with international standards (e.g., ISO 22301, ITIL) while accommodating unique organizational constraints. Demonstrated via focus groups and semi-structured interviews with 10 organizations, the system proved effective in enhancing top management commitment, prioritizing resource allocation, and streamlining BCM implementation, particularly for SMEs with limited resources. Key contributions include a reusable self-assessment tool adaptable to any BCM framework, empirical validation of its utility in identifying weaknesses and guiding continuous improvement, and a pathway from initial assessment to advanced measurement via the Plan-Do-Check-Act cycle. By bridging the gap between theoretical standards and practical application, this research offers a scalable solution for organizations to systematically evaluate and improve BCM resilience.
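A minimal sketch of the kind of weighted 0-100 scoring with red/yellow/green indicators the abstract describes. The question scores, weights, and band thresholds below are illustrative assumptions, not the paper's calibrated values.

```python
def component_score(answers):
    """answers: list of (score_0_to_100, weight) pairs for one BCM component."""
    total_weight = sum(w for _, w in answers)
    return sum(s * w for s, w in answers) / total_weight

def indicator(score, red_below=40, green_from=70):
    """Map a 0-100 score to a traffic-light band (assumed thresholds)."""
    if score < red_below:
        return "red"
    return "green" if score >= green_from else "yellow"

# One illustrative component assessed through three weighted questions.
answers = [(80, 2.0), (50, 1.0), (30, 1.5)]
s = component_score(answers)
print(f"as-is score: {s:.1f}/100 -> {indicator(s)}")
```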
2025
Authors
Accinelli, E; Afsar, A; Martins, F; Martins, J; Oliveira, BMPM; Oviedo, J; Pinto, AA; Quintas, L;
Publication
MATHEMATICAL METHODS IN THE APPLIED SCIENCES
Abstract
This paper fits in the theory of international agreements by studying the success of stable coalitions of agents seeking the preservation of a public good. Extending Baliga and Maskin, we consider a model of N homogeneous agents with quasi-linear utilities of the form u_j(r_j; r) = r^alpha - r_j, where r is the aggregate contribution and the exponent alpha is the elasticity of the gross utility. When the value of the elasticity alpha increases in its natural range (0, 1), we prove the following five main results in the formation of stable coalitions: (i) the gap of cooperation, characterized as the ratio of the welfare of the grand coalition to the welfare of the competitive singleton coalition, grows to infinity, which we interpret as a measure of the urge or need to save the public good; (ii) the size of stable coalitions increases from 1 up to N; (iii) the ratio of the welfare of stable coalitions to the welfare of the competitive singleton coalition grows to infinity; (iv) the ratio of the welfare of stable coalitions to the welfare of the grand coalition decreases (a lot) until the number of members of the stable coalition is approximately N/e, and after that it increases (a lot); and (v) the growth of stable coalitions occurs with a much greater loss for coalition members than for free-riders. Result (v) has two major drawbacks: (a) a priori, it is difficult to convince agents to be members of the stable coalition, and (b) together with results (i) and (iv), it explains and leads to the pessimistic Barrett's paradox of cooperation, even in a case not much considered in the literature: the ratio of the welfare of the stable coalitions against the welfare of the grand coalition is small, even in the extreme case where there are few (or a single) free-riders and the gap of cooperation is large. Optimistically, result (iii) shows that stable coalitions do much better than the competitive singleton coalition. Furthermore, result (ii) proves that the paradox of cooperation is resolved for larger values of alpha, so that the grand coalition is stabilized.
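Result (i) can be illustrated numerically. The sketch below assumes symmetric interior first-order conditions for the quasi-linear utility above: the aggregate contribution solves alpha*r^(alpha-1) = 1 under competitive singleton play and N*alpha*r^(alpha-1) = 1 under the grand coalition. This is a back-of-the-envelope sketch of the model's qualitative behavior, not the paper's derivation.

```python
# Gap of cooperation for u_j(r_j; r) = r**alpha - r_j, under assumed
# symmetric interior solutions (illustrative only).

def welfare(N, alpha, r):
    """Sum of the N quasi-linear utilities at aggregate contribution r."""
    return N * r**alpha - r

N = 10
for alpha in (0.3, 0.5, 0.7, 0.9):
    r_single = alpha ** (1.0 / (1.0 - alpha))        # competitive aggregate
    r_grand = (N * alpha) ** (1.0 / (1.0 - alpha))   # cooperative aggregate
    gap = welfare(N, alpha, r_grand) / welfare(N, alpha, r_single)
    print(f"alpha={alpha}: gap of cooperation ~ {gap:.2f}")
```

Running this shows the gap rising from roughly 2 at alpha = 0.3 to the order of 10^8 at alpha = 0.9, consistent with the claim that the gap grows to infinity as alpha approaches 1.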
2025
Authors
Piardi, L; de Oliveira, AS; Costa, P; Leitao, P;
Publication
COMPUTERS IN INDUSTRY
Abstract
In the era of Industry 4.0, fault tolerance is essential for maintaining the robustness and resilience of industrial systems facing unforeseen or undesirable disturbances. Current methodologies for the fault tolerance stages, namely detection, diagnosis, and recovery, have not kept pace with the accelerated technological evolution of the past two decades. Driven by the advent of digital technologies such as the Internet of Things, cloud and edge computing, and artificial intelligence, together with enhanced computational processing and communication capabilities, local or monolithic centralized fault tolerance methodologies are out of sync with contemporary and future systems. Consequently, these methodologies are limited in achieving the maximum benefits enabled by the integration of these technologies, such as accuracy and performance improvements. Accordingly, this paper proposes a collaborative fault tolerance methodology for cyber-physical systems, named Collaborative Fault * (CF*). The proposed methodology takes advantage of the inherent data analysis and communication capabilities of cyber-physical components. It is based on multi-agent system principles, where key components are self-fault-tolerant and adopt collaborative and distributed intelligence behavior when necessary to improve their fault tolerance capabilities. Experiments were conducted focusing on the fault detection stage for temperature and humidity sensors in warehouse racks. The experimental results confirmed accuracy and performance improvements under CF* compared with the local methodology, and competitiveness compared with a centralized approach.
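A minimal sketch in the spirit of CF*'s collaborative detection stage: an agent cross-checks its temperature reading against neighboring agents on the same rack and flags a fault on large deviation from their consensus. The median/deviation rule, the threshold, and the reading values are assumptions for illustration, not the paper's specification.

```python
import statistics

def detect_fault(own_reading, neighbor_readings, threshold=3.0):
    """Flag a fault when a reading strays too far from the neighbor consensus.

    Uses the neighbors' median as consensus and their population standard
    deviation as the scale; the threshold of 3 deviations is an assumption.
    """
    consensus = statistics.median(neighbor_readings)
    spread = statistics.pstdev(neighbor_readings) or 1e-6  # avoid zero division
    return abs(own_reading - consensus) / spread > threshold

# Temperature readings (degrees C) from agents on the same warehouse rack.
neighbors = [21.8, 22.1, 21.9, 22.0, 22.2]
print(detect_fault(22.0, neighbors))   # False: consistent with the rack
print(detect_fault(35.5, neighbors))   # True: likely drifting or faulty sensor
```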
2025
Authors
Campos, R; Jorge, M; Jatowt, A; Bhatia, S; Litvak, M;
Publication
CEUR Workshop Proceedings
Abstract
[No abstract available]