2024
Authors
Paiva, JC; Leal, JP; Figueira, A;
Publication
COMPUTER SCIENCE AND INFORMATION SYSTEMS
Abstract
Static source code analysis techniques are gaining relevance in the automated assessment of programming assignments, as they can provide less rigorous evaluation and more comprehensive, formative feedback. These techniques focus on aspects of the source code itself rather than requiring its execution. To this end, the syntactic and semantic information encoded in the textual data is typically represented internally as graphs, after parsing and other preprocessing stages. Static automated assessment techniques therefore draw inferences from these intermediate representations to determine the correctness of a solution and derive feedback. Consequently, choosing the most effective semantic graph representation of source code for a specific task is critical, as it impacts the technique's accuracy, outcome, and execution time. This paper provides a thorough comparison of the most widespread semantic graph representations for the automated assessment of programming assignments, including usage examples, facets, and costs for each representation. A benchmark was conducted to assess their cost, using the Abstract Syntax Tree (AST) as a baseline. The results demonstrate that the Code Property Graph (CPG) is the most feature-rich representation, but also the largest and most space-consuming (about 33% larger than the AST).
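As a quick illustration of the AST baseline used in the benchmark, the sketch below parses a small solution with Python's standard ast module and counts nodes and edges as a crude proxy for representation size. This is illustrative only and is not the benchmarking tooling used in the paper.

```python
# Illustrative only: gauge the size of an AST-based representation using
# Python's built-in ast module. The paper benchmarks graph representations
# such as AST and CPG with its own tooling.
import ast

source = """
def is_even(n):
    return n % 2 == 0
"""

tree = ast.parse(source)

# Count nodes and parent-child edges as a crude proxy for representation size.
nodes = list(ast.walk(tree))
edges = sum(len(list(ast.iter_child_nodes(n))) for n in nodes)

print(f"AST nodes: {len(nodes)}, edges: {edges}")
```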
2024
Authors
Mendonça, M; Figueira, A;
Publication
INFORMATICS-BASEL
Abstract
As social media (SM) becomes increasingly prevalent, its impact on society is expected to grow accordingly. While SM has brought positive transformations, it has also amplified pre-existing issues such as misinformation, echo chambers, manipulation, and propaganda. A thorough comprehension of this impact, aided by state-of-the-art analytical tools and by an awareness of societal biases and complexities, enables us to anticipate and mitigate its potential negative effects. One such tool is BERTopic, a novel deep-learning algorithm for topic mining that has been shown to offer significant advantages over traditional methods such as Latent Dirichlet Allocation (LDA), particularly its high modularity, which allows for extensive customization at each stage of the topic modeling process. In this study, we hypothesize that BERTopic, when optimized for Twitter data, can produce more coherent and stable topic models. We began by reviewing the literature on topic-mining approaches for short-text data. Using this knowledge, we explored the potential for optimizing BERTopic and analyzed its effectiveness. Our focus was on Twitter data spanning the two years of the 117th US Congress. We evaluated BERTopic's performance using coherence, perplexity, diversity, and stability scores, finding significant improvements over both traditional methods and the tool's default parameters, particularly in coherence and stability. We also identified the major topics of this Congress, which include abortion, student debt, and Judge Ketanji Brown Jackson. Additionally, we describe a simple application we developed for better visualization of the Congress topics.
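The following sketch illustrates the per-stage modularity described above, swapping custom dimensionality-reduction, clustering, and tokenization components into BERTopic. The parameter values and embedding model are illustrative assumptions, not the optimized settings reported in the study.

```python
# Sketch of BERTopic's modular pipeline: each stage (embedding, dimensionality
# reduction, clustering, tokenization) can be swapped or tuned independently.
# Parameter values below are illustrative, not the study's optimized settings.
from bertopic import BERTopic
from umap import UMAP
from hdbscan import HDBSCAN
from sklearn.feature_extraction.text import CountVectorizer

# Replace with a real corpus of tweets; a handful of documents is not enough to cluster.
tweets = ["...", "..."]

topic_model = BERTopic(
    embedding_model="all-MiniLM-L6-v2",  # sentence-transformers model name
    umap_model=UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric="cosine"),
    hdbscan_model=HDBSCAN(min_cluster_size=30, prediction_data=True),
    vectorizer_model=CountVectorizer(stop_words="english", ngram_range=(1, 2)),
)

topics, probs = topic_model.fit_transform(tweets)
print(topic_model.get_topic_info().head())
```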
2024
Authors
Rodrigues, M; Leal, JP; Portela, F;
Publication
SLATE
Abstract
[No abstract available]
2024
Authors
dos Santos, AF; Leal, JP;
Publication
13th Symposium on Languages, Applications and Technologies, SLATE 2024, July 4-5, 2024, Águeda, Portugal
Abstract
Semantic measure (SM) algorithms allow software to mimic the human ability of assessing the strength of the semantic relations between elements such as concepts, entities, words, or sentences. SM algorithms are typically evaluated by comparison against gold standard datasets built by human annotators. These datasets are composed of pairs of elements and an averaged numeric rating. Building such datasets usually requires asking human annotators to assign a numeric value to their perception of the strength of the semantic relation between two elements. Large language models (LLMs) have recently been successfully used to perform tasks which previously required human intervention, such as text summarization, essay writing, image description, image synthesis, and question answering. In this paper, we present ongoing research on LLMs' capabilities for semantic relations assessment. We queried several LLMs to rate the relationship of pairs of elements from existing semantic measure evaluation datasets, and measured the correlation between the LLMs' results and the gold standard datasets. Furthermore, we performed additional experiments to evaluate which other factors can influence LLM performance in this task. We present and discuss the results obtained so far.
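The sketch below illustrates the kind of evaluation loop described above: an LLM is asked to rate pairs from a gold standard dataset and its ratings are correlated with the human scores. The model name, prompt wording, and example pairs are assumptions for illustration, not the exact setup used in this research.

```python
# Minimal sketch: ask an LLM to rate word pairs from a gold standard dataset
# and correlate its ratings with the human scores. Model, prompt, and parsing
# are assumptions, not the paper's exact setup.
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Example pairs with human relatedness ratings (illustrative values).
gold = [("car", "automobile", 3.92), ("noon", "string", 0.08)]

llm_scores, human_scores = [], []
for w1, w2, human in gold:
    prompt = (f"On a scale from 0 to 4, how semantically related are "
              f"'{w1}' and '{w2}'? Answer with a single number.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model answers with just a number; real runs need robust parsing.
    llm_scores.append(float(resp.choices[0].message.content.strip()))
    human_scores.append(human)

rho, _ = spearmanr(llm_scores, human_scores)
print(f"Spearman correlation with gold standard: {rho:.3f}")
```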
2024
Authors
Bauer, Y; Leal, JP; Queirós, R;
Publication
5th International Computer Programming Education Conference, ICPEC 2024, June 27-28, 2024, Lisbon, Portugal
Abstract
Generative AI presents both challenges and opportunities for educators. This paper explores its potential for automating the creation of programming exercises designed for automated assessment. Traditionally, creating these exercises is a time-intensive and error-prone task that involves developing exercise statements, solutions, and test cases. This ongoing research analyzes the capabilities of the OpenAI GPT API to create these components automatically. An experiment using the OpenAI GPT API to generate 120 programming exercises produced interesting results, such as the difficulties encountered in generating valid JSON formats and in creating test cases that match the solution code. Learning from this experiment, an enhanced feature was developed to assist teachers in creating programming exercises and was integrated into Agni, a virtual learning environment (VLE). Despite the challenges in generating entirely correct programming exercises, this approach shows potential for reducing the time required to create exercises, thus significantly aiding teachers. The evaluation of this approach, comparing the efficiency and usefulness of using the OpenAI GPT API versus authoring the exercises oneself, is in progress.
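The sketch below illustrates the kind of generation call described above, including a guard for the invalid JSON outputs mentioned in the experiment. The prompt, model, and field names are assumptions and do not reflect Agni's actual implementation.

```python
# Sketch of generating a programming exercise (statement, solution, tests) via
# the OpenAI GPT API, with a guard for malformed JSON outputs.
# Prompt, model, and field names are illustrative only.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create a beginner JavaScript programming exercise as a JSON object with "
    "the keys 'statement', 'solution', and 'tests' (a list of input/output pairs)."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # nudges the model toward valid JSON
)

try:
    exercise = json.loads(resp.choices[0].message.content)
    print(exercise["statement"])
except (json.JSONDecodeError, KeyError) as err:
    # The output may still be malformed or miss keys; regenerate or flag for review.
    print(f"Generated exercise could not be parsed: {err}")
```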
2024
Authors
Filgueiras, A; Marques, ERB; Lopes, LMB; Marques, M; Silva, H;
Publication
CoRR
Abstract
[No abstract available]