
Publications by Miguel Gonçalves Areias

2020

Message from the General Chairs: SBAC-PAD 2020

Authors
Areias, M; Barbosa, J; Dutra, I;

Publication
Proceedings - Symposium on Computer Architecture and High Performance Computing

Abstract

2021

Preface

Authors
Rocha, R; Formisano, A; Liu, YA; Areias, M; Angelopoulos, N; Bogaerts, B; Dodaro, C; Alviano, M; Brik, A; Vennekens, J; Pozzato, GL; Zhou, NF; Dahl, V; Fodor, P;

Publication
Electronic Proceedings in Theoretical Computer Science, EPTCS

Abstract

2022

On the correctness of a lock-free compression-based elastic mechanism for a hash trie design

Authors
Areias, M; Rocha, R;

Publication
COMPUTING

Abstract
A key aspect of any hash map design is the problem of dynamically resizing it in order to deal with hash collisions. Compression in tree-based hash maps is the ability of reducing the depth of the internal hash levels that support the hash map. In this context, elasticity refers to the ability of automatically resizing the internal data structures that support the hash map operations in order to meet varying workloads, thus optimizing the overall memory consumption of the hash map. This work extends a previous lock-free hash trie map design to support elastic hashing, i.e., expand saturated hash levels and compress unused hash levels, such that, at each point in time, the number of levels in a path is adjusted, as closely as possible, to the set of keys that is stored in the data structure. To materialize our design, we introduce a new compress operation for hash levels, which requires redesigning the existing search, insert, remove and expand operations in order to maintain the lock-freedom property of the data structure. Experimental results show that elasticity effectively improves the search operation and, in doing so, our design becomes very competitive when compared to other state-of-the-art designs implemented in Java.
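The expand/compress behaviour described in the abstract can be illustrated with a small sequential sketch. The class, thresholds, and fanout below are illustrative assumptions; the paper's actual design is lock-free and concurrent, which this sketch does not attempt to reproduce.

```python
# Sequential sketch of the elastic hash-trie idea: a level expands into
# child sub-levels when it saturates, and compresses back into a flat
# bucket when few keys remain. Thresholds and names are assumptions.

EXPAND_AT = 4    # keys in a flat level before expanding (assumed)
COMPRESS_AT = 2  # keys left under a level before compressing (assumed)
FANOUT = 8       # sub-levels per expanded level (assumed)

class Level:
    def __init__(self, depth=0):
        self.depth = depth
        self.keys = {}        # flat bucket: key -> value
        self.children = None  # FANOUT sub-levels once expanded

    def _slot(self, key):
        # consume 3 hash bits per depth level (8-way fanout)
        return (hash(key) >> (self.depth * 3)) & (FANOUT - 1)

    def insert(self, key, value):
        if self.children is not None:
            self.children[self._slot(key)].insert(key, value)
            return
        self.keys[key] = value
        if len(self.keys) > EXPAND_AT:   # expand a saturated level
            self.children = [Level(self.depth + 1) for _ in range(FANOUT)]
            for k, v in self.keys.items():
                self.children[self._slot(k)].insert(k, v)
            self.keys = {}

    def search(self, key):
        if self.children is not None:
            return self.children[self._slot(key)].search(key)
        return self.keys.get(key)

    def remove(self, key):
        if self.children is None:
            self.keys.pop(key, None)
            return
        self.children[self._slot(key)].remove(key)
        if sum(c.size() for c in self.children) <= COMPRESS_AT:
            flat = {}                    # compress an underused level
            for c in self.children:
                flat.update(c.items())
            self.children, self.keys = None, flat

    def size(self):
        if self.children is not None:
            return sum(c.size() for c in self.children)
        return len(self.keys)

    def items(self):
        if self.children is None:
            return dict(self.keys)
        out = {}
        for c in self.children:
            out.update(c.items())
        return out
```

After enough removals the path depth shrinks again, which is the "number of levels adjusted to the set of stored keys" property the abstract refers to.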

2025

Large Language Model Framework for Log Sequence Anomaly Detection

Authors
Reis, J; Areias, M; Barbosa, JG;

Publication
Progress in Artificial Intelligence - 24th EPIA Conference on Artificial Intelligence, EPIA 2025, Faro, Portugal, October 1-3, 2025, Proceedings, Part I

Abstract
Log analysis is fundamental to modern software observability systems, playing a key role in improving system reliability. Recently, there has been a growing adoption of Large Language Models (LLMs) for log anomaly detection, due to their ability to learn complex patterns. In this work, we propose a model-agnostic framework that allows seamless plug-and-play integration of different LLMs, making it easy to experiment with and select the model that fits specific needs. These models are first fine-tuned on normal log data, learning their patterns. During inference, the model predicts the most probable next tokens based on the preceding context in each sequence. Anomaly detection is performed using Top-K predictions, where sequences are flagged as anomalous if the actual log entry does not appear among the K most probable next tokens, with K determined using the validation dataset. The proposed framework is evaluated on three widely used benchmark datasets (HDFS, BGL, and Thunderbird), where it consistently achieves competitive results, outperforming state-of-the-art methods in multiple scenarios. These results highlight the effectiveness of LLM-based log analysis and the importance of flexibility when selecting models for specific operational contexts.
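The Top-K decision rule from the abstract can be sketched in a few lines. The `predict_probs` callable below is a stand-in for the fine-tuned LLM, and the toy model and token names are assumptions for illustration only, not the paper's implementation.

```python
# Sketch of Top-K log-sequence anomaly detection: a sequence is flagged
# as anomalous if any actual next token falls outside the model's K most
# probable predictions for the preceding context.

def topk_anomalous(sequence, predict_probs, k):
    """predict_probs(prefix) -> {token: probability} (assumed API)."""
    for i in range(1, len(sequence)):
        probs = predict_probs(sequence[:i])
        topk = sorted(probs, key=probs.get, reverse=True)[:k]
        if sequence[i] not in topk:
            return True   # actual token outside Top-K -> anomaly
    return False

# Toy "model": after any prefix, expects 'read' or 'close' most.
def toy_model(prefix):
    return {"read": 0.6, "close": 0.3, "crash": 0.1}

print(topk_anomalous(["open", "read", "close"], toy_model, k=2))  # False
print(topk_anomalous(["open", "crash"], toy_model, k=2))          # True
```

In the paper, K is chosen on a validation set; here it is fixed for the example.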

2025

A sleek lock-free hash map in an ERA of safe memory reclamation methods

Authors
Moreno, P; Areias, M; Rocha, R;

Publication
PARALLEL COMPUTING

Abstract
Lock-free data structures have become increasingly significant due to their algorithmic advantages in multi-core cache-based architectures. Safe Memory Reclamation (SMR) is a technique used in concurrent programming to ensure that memory can be safely reclaimed without causing data corruption, dangling pointers, or access to freed memory. The ERA theorem states that any SMR method for concurrent data structures can only provide at most two of the three main desirable properties: Ease of use, Robustness, and Applicability. This fundamental trade-off influences the design of efficient lock-free data structures at an early stage. This work redesigns a previous lock-free hash map to fully exploit the properties of the ERA theorem and to leverage the characteristics of multi-core cache-based architectures by minimizing the number of cache misses, which are a significant bottleneck in multi-core environments. Experimental results show that our design outperforms the previous design, which was already quite competitive when compared against the Concurrent Hash Map design of Intel's TBB library.
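The core SMR idea the abstract builds on, deferring a node's reclamation until no reader can still hold a reference to it, can be illustrated with a toy epoch-based scheme. Everything below (class, method names, single-threaded bookkeeping) is an illustrative assumption; it is not the paper's mechanism and ignores the atomicity a real SMR implementation requires.

```python
# Toy epoch-based safe memory reclamation: readers announce the epoch
# they entered at; a retired node is only handed back for reuse once
# every active reader entered at a later epoch than the retirement.

class EpochReclaimer:
    def __init__(self):
        self.global_epoch = 0
        self.active = {}   # reader id -> epoch observed on entry
        self.retired = []  # (epoch at retirement, node) awaiting free

    def enter(self, rid):
        self.active[rid] = self.global_epoch   # reader starts an access

    def exit(self, rid):
        del self.active[rid]                   # reader finished

    def retire(self, node):
        # node was unlinked from the structure but may still be read
        self.retired.append((self.global_epoch, node))

    def advance_and_collect(self):
        self.global_epoch += 1
        oldest = min(self.active.values(), default=self.global_epoch)
        freed = [n for e, n in self.retired if e < oldest]
        self.retired = [(e, n) for e, n in self.retired if e >= oldest]
        return freed   # safe to reuse: no reader predates these retires
```

A node retired while a reader is active is held back until that reader exits, which is exactly the dangling-pointer hazard SMR methods exist to prevent.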

2025

Performance Evaluation of Separate Chaining for Concurrent Hash Maps

Authors
Castro, A; Areias, M; Rocha, R;

Publication
MATHEMATICS

Abstract
Hash maps are a widely used and efficient data structure for storing and accessing data organized as key-value pairs. Multithreading with hash maps refers to the ability to concurrently execute multiple lookup, insert, and delete operations, such that each operation runs independently while sharing the underlying data structure. One of the main challenges in hash map implementation is the management of collisions. Arguably, separate chaining is among the most well-known strategies for collision resolution. In this paper, we present a comprehensive study comparing two common approaches to implementing separate chaining (linked lists and dynamic arrays) in a multithreaded environment using a lock-based concurrent hash map design. Our study includes a performance evaluation covering parameters such as cache behavior, energy consumption, contention under concurrent access, and resizing overhead. Experimental results show that dynamic arrays maintain more predictable memory access and lower energy consumption in multithreaded environments.
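The two chaining layouts compared in the abstract differ only in how a bucket stores its colliding entries. The sketch below shows that structural difference; class names are illustrative assumptions, and the locking and evaluation harness from the paper are deliberately omitted.

```python
# Separate chaining, two ways: per-bucket linked lists (pointer chasing
# on lookup) versus per-bucket dynamic arrays (contiguous storage, which
# is what gives the more predictable memory access the abstract reports).

class ListNode:
    def __init__(self, key, value, nxt=None):
        self.key, self.value, self.nxt = key, value, nxt

class LinkedChainMap:
    def __init__(self, nbuckets=16):
        self.buckets = [None] * nbuckets
    def insert(self, key, value):
        i = hash(key) % len(self.buckets)
        node = self.buckets[i]
        while node:                       # update in place if present
            if node.key == key:
                node.value = value
                return
            node = node.nxt
        self.buckets[i] = ListNode(key, value, self.buckets[i])
    def lookup(self, key):
        node = self.buckets[hash(key) % len(self.buckets)]
        while node:                       # one pointer hop per entry
            if node.key == key:
                return node.value
            node = node.nxt
        return None

class ArrayChainMap:
    def __init__(self, nbuckets=16):
        self.buckets = [[] for _ in range(nbuckets)]
    def insert(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for pair in bucket:               # update in place if present
            if pair[0] == key:
                pair[1] = value
                return
        bucket.append([key, value])       # entries stay contiguous
    def lookup(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:               # linear scan of one array
            if k == key:
                return v
        return None
```

Both expose the same map interface, so a lock-based concurrent wrapper could guard either with per-bucket locks; only the bucket's memory layout changes.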
