Publications

Publications by CRACS

2024

On the Use of VGs for Feature Selection in Supervised Machine Learning - A Use Case to Detect Distributed DoS Attacks

Authors
Lopes, J; Partida, A; Pinto, P; Pinto, A;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT I, OL2A 2023

Abstract
Information systems depend on security mechanisms to detect and respond to cyber-attacks. One of the most frequent attacks is the Distributed Denial of Service (DDoS): it impairs the performance of systems and, in the worst case, leads to prolonged periods of downtime that prevent business processes from running normally. To detect this attack, several supervised Machine Learning (ML) algorithms have been developed, and companies use them to protect their servers. A key stage in these algorithms is feature pre-processing, in which input data features are assessed and selected to obtain the best results in the subsequent stages of the supervised ML pipeline. In this article, an innovative approach for feature selection is proposed: the use of Visibility Graphs (VGs) to select features for supervised ML algorithms used to detect distributed DoS attacks. The results show that VGs can be implemented quickly and can compete with other feature selection methods, as they require low computational resources and offer satisfactory results, at least in our example based on the early detection of distributed DoS attacks. The size of the processed data appears to be the main implementation constraint for this novel feature selection method.
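The core construction behind this abstract is the natural visibility graph, which maps a time series (e.g. per-second packet counts) to a graph whose node properties can then serve as candidate ML features. A minimal sketch of that mapping, assuming the standard natural-visibility criterion (the paper's exact pipeline and feature choices are not reproduced here):

```python
def visibility_graph(series):
    """Natural Visibility Graph: each time-series sample is a node;
    samples (a, y_a) and (b, y_b) are linked if every intermediate
    sample (c, y_c) lies strictly below the straight line joining them.
    Naive O(n^2) construction for illustration only."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

# A peak "sees" both neighbours and across the valley;
# repeated spikes block long-range visibility.
g = visibility_graph([3, 1, 2])
```

Graph-level statistics of the result (node degrees, edge density, etc.) are the kind of quantities such a method could rank features by.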

2024

Rethinking negative sampling in content-based news recommendation

Authors
Rebelo, MA; Vinagre, J; Pereira, I; Figueira, A;

Publication
CoRR

Abstract
[No abstract available]

2024

Comparing Semantic Graph Representations of Source Code: The Case of Automatic Feedback on Programming Assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
COMPUTER SCIENCE AND INFORMATION SYSTEMS

Abstract
Static source code analysis techniques are gaining relevance in the automated assessment of programming assignments, as they can provide less rigorous evaluation and more comprehensive and formative feedback. These techniques focus on source code aspects rather than requiring effective code execution. To this end, syntactic and semantic information encoded in textual data is typically represented internally as graphs, after parsing and other preprocessing stages. Static automated assessment techniques, therefore, draw inferences from intermediate representations to determine the correctness of a solution and derive feedback. Consequently, achieving the most effective semantic graph representation of source code for the specific task is critical, impacting the techniques' accuracy, outcome, and execution time. This paper aims to provide a thorough comparison of the most widespread semantic graph representations for the automated assessment of programming assignments, including usage examples, facets, and costs for each of these representations. A benchmark has been conducted to assess their cost using the Abstract Syntax Tree (AST) as a baseline. The results demonstrate that the Code Property Graph (CPG) is the most feature-rich representation, but also the largest and most space-consuming (about 33% more than AST).
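The AST used as the baseline in this comparison can be obtained directly from a language's parser. A small illustration with Python's standard `ast` module (the paper's benchmark and tooling are not reproduced, only the idea of parsing source into a tree and measuring its size):

```python
import ast

# Parse a tiny program into its Abstract Syntax Tree (the baseline
# representation in the comparison) without executing it.
tree = ast.parse("def add(a, b):\n    return a + b")

# One simple cost measure: number of nodes in the tree.
node_count = sum(1 for _ in ast.walk(tree))
```

Richer representations such as the CPG layer control-flow and data-flow edges on top of such a tree, which is where the reported ~33% size overhead comes from.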

2024

Topic Extraction: BERTopic's Insight into the 117th Congress's Twitterverse

Authors
Mendonça, M; Figueira, A;

Publication
INFORMATICS-BASEL

Abstract
As social media (SM) becomes increasingly prevalent, its impact on society is expected to grow accordingly. While SM has brought positive transformations, it has also amplified pre-existing issues such as misinformation, echo chambers, manipulation, and propaganda. A thorough comprehension of this impact, aided by state-of-the-art analytical tools and by an awareness of societal biases and complexities, enables us to anticipate and mitigate the potential negative effects. One such tool is BERTopic, a novel deep-learning algorithm developed for Topic Mining, which has been shown to offer significant advantages over traditional methods like Latent Dirichlet Allocation (LDA), particularly in terms of its high modularity, which allows for extensive personalization at each stage of the topic modeling process. In this study, we hypothesize that BERTopic, when optimized for Twitter data, can provide more coherent and stable topic models. We began by conducting a review of the literature on topic-mining approaches for short-text data. Using this knowledge, we explored the potential for optimizing BERTopic and analyzed its effectiveness. Our focus was on Twitter data spanning the two years of the 117th US Congress. We evaluated BERTopic's performance using coherence, perplexity, diversity, and stability scores, finding significant improvements over traditional methods and over the tool's default parameters, particularly in coherence and stability. We also identified the major topics of this Congress, which include abortion, student debt, and Judge Ketanji Brown Jackson. Additionally, we describe a simple application we developed for a better visualization of Congress topics.
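One of the evaluation scores named in this abstract, topic diversity, has a simple standard definition: the fraction of unique words across all topics' top-word lists. A minimal sketch of that metric (the study's exact scoring setup is assumed, not reproduced):

```python
def topic_diversity(topics):
    """Topic diversity: proportion of unique words across all topics'
    top-word lists. 1.0 means no topic shares a top word with another;
    values near 0 indicate heavily overlapping, redundant topics."""
    all_words = [word for topic in topics for word in topic]
    return len(set(all_words)) / len(all_words)
```

Applied to a fitted model's top-10 words per topic, this gives one of the axes on which an optimized BERTopic configuration can be compared against LDA or default parameters.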

2024

13th Symposium on Languages, Applications and Technologies, SLATE 2024, July 4-5, 2024, Águeda, Portugal

Authors
Rodrigues, M; Leal, JP; Portela, F;

Publication
SLATE

Abstract
[No abstract available]

2024

Early Findings in Using LLMs to Assess Semantic Relations Strength (Short Paper)

Authors
dos Santos, AF; Leal, JP;

Publication
SLATE

Abstract
Semantic measure (SM) algorithms allow software to mimic the human ability to assess the strength of the semantic relations between elements such as concepts, entities, words, or sentences. SM algorithms are typically evaluated by comparison against gold standard datasets built by human annotators. These datasets are composed of pairs of elements and an averaged numeric rating. Building such datasets usually requires asking human annotators to assign a numeric value to their perception of the strength of the semantic relation between two elements. Large language models (LLMs) have recently been used successfully to perform tasks which previously required human intervention, such as text summarization, essay writing, image description, image synthesis, and question answering. In this paper, we present ongoing research on the capabilities of LLMs for semantic relation assessment. We queried several LLMs to rate the relationship between pairs of elements from existing semantic measure evaluation datasets, and measured the correlation between the results from the LLMs and the gold standard datasets. Furthermore, we performed additional experiments to evaluate which other factors can influence LLM performance in this task. We present and discuss the results obtained so far.
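The evaluation described here boils down to correlating model-produced ratings with human gold-standard ratings over the same pairs. A minimal stdlib-only sketch using Pearson correlation (the paper may use other coefficients, e.g. Spearman; the rating vectors below are hypothetical):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating vectors,
    e.g. LLM-assigned scores vs. gold-standard human averages."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A correlation near 1 over a dataset such as WordSim-style pairs would indicate the LLM's notion of relatedness tracks the human annotations.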
