2024
Authors
dos Santos, AF; Leal, JP;
Publication
13th Symposium on Languages, Applications and Technologies, SLATE 2024, July 4-5, 2024, Águeda, Portugal
Abstract
2025
Authors
Paiva, JC; Leal, JP; Figueira, A;
Publication
Electronics
Abstract
Automated assessment tools for programming assignments have become increasingly popular in computing education. These tools offer a cost-effective and highly available way to provide timely and consistent feedback to students. However, when evaluating logically incorrect source code, there are reasonable concerns about the formative gap between the feedback generated by such tools and that of human teaching assistants. A teaching assistant pinpoints logical errors, describes how the program fails to perform the proposed task, or suggests possible ways to fix mistakes without revealing the correct code. Automated assessment tools, on the other hand, typically return a measure of the program's correctness, possibly backed by failing test cases and, only in a few cases, fixes to the program. In this paper, we introduce a tool, AsanasAssist, that generates formative feedback messages helping students repair functionality mistakes in their submitted source code, based on the most similar algorithmic-strategy solution. These suggestions are delivered with incremental levels of detail according to the student's needs, from identifying the block containing the error to displaying the correct source code. Furthermore, we evaluate how well the automatically generated messages provided by AsanasAssist match those provided by a human teaching assistant. The results demonstrate that the tool provides feedback comparable to that of a human grader while being able to deliver it just in time.
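To make the idea of incremental feedback concrete, the following is a minimal Python sketch, not AsanasAssist's actual pipeline: it assumes a hypothetical set of reference solutions (one per algorithmic strategy), picks the closest one with difflib, and grows the detail of the message with the requested level, from a coarse hint to the full correct code.

import difflib

# Hypothetical reference solutions, one per algorithmic strategy.
# The tool's real repair mechanism is more elaborate than a text diff.
REFERENCE_SOLUTIONS = {
    "iterative": "def fact(n):\n    r = 1\n    for i in range(2, n + 1):\n        r *= i\n    return r\n",
    "recursive": "def fact(n):\n    return 1 if n <= 1 else n * fact(n - 1)\n",
}

def closest_solution(submission: str) -> str:
    """Pick the reference solution most similar to the submission."""
    return max(REFERENCE_SOLUTIONS.values(),
               key=lambda ref: difflib.SequenceMatcher(None, submission, ref).ratio())

def feedback(submission: str, level: int) -> str:
    """Return a message whose detail grows with `level` (1 to 3)."""
    ref = closest_solution(submission)
    diff = difflib.unified_diff(submission.splitlines(), ref.splitlines(), lineterm="")
    changed = [l for l in diff
               if l.startswith(("-", "+")) and not l.startswith(("---", "+++"))]
    if level == 1:  # locate: how far the code is from a working solution
        return f"Check your code: {len(changed)} line(s) differ from a working solution."
    if level == 2:  # describe: which submitted lines to revisit
        return "Lines to revisit:\n" + "\n".join(l for l in changed if l.startswith("-"))
    return "Closest correct solution:\n" + ref  # reveal the correct code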
2024
Authors
dos Santos, AF; Leal, JP;
Publication
OpenAccess Series in Informatics
Abstract
Semantic measure (SM) algorithms allow software to mimic the human ability to assess the strength of semantic relations between elements such as concepts, entities, words, or sentences. SM algorithms are typically evaluated by comparison against gold standard datasets built by human annotators. These datasets are composed of pairs of elements, each with an averaged numeric rating. Building such datasets usually requires asking human annotators to assign a numeric value to their perception of the strength of the semantic relation between two elements. Large language models (LLMs) have recently been used successfully to perform tasks that previously required human intervention, such as text summarization, essay writing, image description, image synthesis, and question answering. In this paper, we present ongoing research on the capabilities of LLMs for semantic relation assessment. We queried several LLMs to rate the relationship between pairs of elements from existing semantic measure evaluation datasets, and measured the correlation between the LLM ratings and the gold standard values. Furthermore, we performed additional experiments to evaluate which other factors can influence LLM performance on this task. We present and discuss the results obtained so far.
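The evaluation protocol described above can be sketched in a few lines of Python. This is an illustrative outline under assumptions, not the paper's code: llm_rate is a hypothetical stand-in for the actual model queries, the prompt wording and 0-10 scale are assumptions, and rank correlation is computed with scipy.stats.spearmanr.

from scipy.stats import spearmanr  # pip install scipy

def llm_rate(word_a: str, word_b: str) -> float:
    # Hypothetical placeholder: the paper queries several real LLMs
    # with a prompt asking for a numeric rating of the pair.
    prompt = (f"On a scale from 0 to 10, how strongly are '{word_a}' "
              f"and '{word_b}' semantically related? Answer with a single number.")
    raise NotImplementedError("plug in an LLM client here")

def evaluate(gold: list[tuple[str, str, float]]) -> float:
    """Spearman correlation between LLM ratings and gold-standard ratings.

    `gold` holds (element_a, element_b, averaged_human_rating) triples
    taken from an existing SM evaluation dataset.
    """
    llm_scores = [llm_rate(a, b) for a, b, _ in gold]
    gold_scores = [score for _, _, score in gold]
    rho, _pvalue = spearmanr(llm_scores, gold_scores)
    return rho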
2019
Authors
Alves, RA; Leal, JP; Limpo, T;
Publication
Studies in Writing
Abstract
2015
Authors
Sierra-Rodríguez, JL; Leal, JP; Simões, A;
Publication
Abstract
2025
Authors
Fernandes dos Santos, A; Leal, JP; Alves, RA; Jacques, T;
Publication
Data in Brief
Abstract
The PAP900 dataset centers on the semantic relationship between affective words in Portuguese. It contains 900 word pairs, each annotated by at least 30 human raters for both semantic similarity and semantic relatedness. In addition to the semantic ratings, the dataset includes the word categorization used to build the word pairs and detailed sociodemographic information about the annotators, enabling the analysis of the influence of personal factors on the perception of semantic relationships. Furthermore, this article describes the dataset construction process in detail, from word selection to agreement metrics. Data were collected from Portuguese university psychology students, who completed two rounds of questionnaires. In the first round, annotators were asked to rate word pairs on either semantic similarity or relatedness. The second round switched the relation type for most annotators, with a small percentage asked to repeat the same relation. The instructions emphasized the differences between semantic relatedness and semantic similarity, and provided examples of expected ratings for both. There are few semantic relation datasets in Portuguese, and none focusing on affective words. PAP900 is distributed in multiple formats, making it easy to use both for researchers who only need the final averaged values and for those who want to take advantage of the individual ratings, the word categorization, and the annotator data. This dataset is a valuable resource for researchers in computational linguistics, natural language processing, psychology, and cognitive science.
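For researchers starting from the individual ratings, aggregating them back into per-pair averages is straightforward. The Python sketch below assumes a hypothetical CSV schema with columns word_a, word_b, relation, and rating; check the actual PAP900 distribution for the real file layout and column names.

import csv
from collections import defaultdict
from statistics import mean

def averaged_ratings(path: str, relation: str) -> dict[tuple[str, str], float]:
    """Average the individual ratings per word pair for one relation type.

    The column names (word_a, word_b, relation, rating) are assumptions
    about the distributed files, not the documented PAP900 schema.
    """
    ratings = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["relation"] == relation:
                ratings[(row["word_a"], row["word_b"])].append(float(row["rating"]))
    return {pair: mean(scores) for pair, scores in ratings.items()}

# Hypothetical usage, e.g.:
# similarity = averaged_ratings("pap900_individual.csv", "similarity")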