2019
Authors
Alves, RA; Leal, JP; Limpo, T;
Publication
Studies in Writing
Abstract
[No abstract available]
2025
Authors
Paiva, JC; Leal, JP; Figueira, A;
Publication
ELECTRONICS
Abstract
Automated assessment tools for programming assignments have become increasingly popular in computing education. These tools offer a cost-effective and highly available way to provide timely and consistent feedback to students. However, when evaluating logically incorrect source code, there are reasonable concerns about the formative gap between the feedback generated by such tools and that of human teaching assistants. A teaching assistant can pinpoint logical errors, describe how the program fails to perform the proposed task, or suggest possible ways to fix mistakes without revealing the correct code. Automated assessment tools, on the other hand, typically return a measure of the program's correctness, possibly backed by failing test cases and, only in a few cases, fixes to the program. In this paper, we introduce AsanasAssist, a tool that generates formative feedback messages to help students repair functionality mistakes in their submitted source code, based on the most similar algorithmic-strategy solution. These suggestions are delivered with incremental levels of detail according to the student's needs, from identifying the block containing the error to displaying the correct source code. Furthermore, we evaluate how well the automatically generated messages provided by AsanasAssist match those provided by a human teaching assistant. The results demonstrate that the tool provides feedback comparable to that of a human grader while delivering it just in time.
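A minimal sketch of the incremental feedback idea described in the abstract, assuming reference solutions are plain Python source strings; the function names (most_similar_solution, feedback) and the use of difflib are illustrative stand-ins, not AsanasAssist's actual implementation or API.

from difflib import SequenceMatcher

def most_similar_solution(submission: str, solutions: list[str]) -> str:
    # Pick the reference solution textually closest to the submission,
    # standing in for matching on algorithmic strategy.
    return max(solutions, key=lambda s: SequenceMatcher(None, submission, s).ratio())

def feedback(submission: str, solutions: list[str], level: int) -> str:
    # Level 1: point at the block containing the error.
    # Level 2: describe the mismatch without revealing code.
    # Level 3: display the correct source code lines.
    reference = most_similar_solution(submission, solutions)
    matcher = SequenceMatcher(None, submission.splitlines(), reference.splitlines())
    diffs = [op for op in matcher.get_opcodes() if op[0] != "equal"]
    if not diffs:
        return "No differences from the closest reference solution."
    _tag, i1, i2, j1, j2 = diffs[0]
    if level == 1:
        return f"Check lines {i1 + 1}-{max(i2, i1 + 1)} of your submission."
    if level == 2:
        return f"Lines {i1 + 1}-{max(i2, i1 + 1)} do not follow the expected algorithmic step."
    return "\n".join(reference.splitlines()[j1:j2])  # may be empty for pure deletions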
2024
Authors
Rodrigues, M; Leal, JP; Portela, F;
Publication
SLATE
Abstract
[No abstract available]
2024
Authors
dos Santos, AF; Leal, JP;
Publication
13th Symposium on Languages, Applications and Technologies, SLATE 2024, July 4-5, 2024, Águeda, Portugal
Abstract
Semantic measure (SM) algorithms allow software to mimic the human ability to assess the strength of the semantic relations between elements such as concepts, entities, words, or sentences. SM algorithms are typically evaluated against gold standard datasets built by human annotators. These datasets consist of pairs of elements with an averaged numeric rating. Building such datasets usually requires asking human annotators to assign a numeric value to their perception of the strength of the semantic relation between two elements. Large language models (LLMs) have recently been used successfully to perform tasks that previously required human intervention, such as text summarization, essay writing, image description, image synthesis, and question answering. In this paper, we present ongoing research on the capabilities of LLMs for assessing semantic relations. We queried several LLMs to rate the relationship between pairs of elements from existing semantic measure evaluation datasets and measured the correlation between the LLMs' ratings and the gold standard datasets. Furthermore, we performed additional experiments to evaluate which other factors influence LLM performance in this task. We present and discuss the results obtained so far.
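A minimal sketch of the evaluation loop described in the abstract: ask a model to rate each pair, then correlate the ratings with the human gold standard. The query_llm function is a hypothetical stand-in for a real LLM API call, and Spearman correlation is one common choice for SM evaluation; the abstract does not specify which correlation coefficient was used.

from scipy.stats import spearmanr

def query_llm(element_a: str, element_b: str) -> float:
    # Hypothetical placeholder: prompt an LLM to rate the semantic
    # relatedness of two elements on the dataset's numeric scale.
    raise NotImplementedError("replace with a real LLM API call")

def evaluate(dataset: list[tuple[str, str, float]]) -> float:
    # dataset: (element_a, element_b, averaged human rating) triples.
    gold = [rating for _, _, rating in dataset]
    predicted = [query_llm(a, b) for a, b, _ in dataset]
    correlation, _p_value = spearmanr(gold, predicted)
    return correlation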
2014
Authors
Lukovic, I; Budimac, Z; Leal, JP; Janousek, J; Rocha, A; Burdescu, DD; Dragan, D;
Publication
COMPUTER SCIENCE AND INFORMATION SYSTEMS
Abstract
[No abstract available]
2021
Authors
Swacha, J; Naprawski, T; Queirós, R; Paiva, JC; Leal, JP; de Vita, CG; Mellone, G; Montella, R; Ljubenkov, D; Kosta, S;
Publication
Proceedings of the Information Systems Education Conference, ISECON
Abstract
Computer programming courses are considered difficult. They can be made more engaging for students by incorporating game elements, a process known as gamification. To make this process easier to apply in practice, several European universities collaborated on a joint project aimed at developing a framework for applying gamification to programming education. The framework includes the specification of the gamification scheme and the exercise definition format, an open-source toolkit for preparing gamified exercises, an interactive learning environment to present them to students, and, last but not least, an open-source collection of gamified programming exercises. In this paper, we present work in progress on the last element, describing the current contents of the collection and planned directions for its extension.
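A purely illustrative sketch of what a single gamified exercise entry could look like, written here as a Python dictionary; all field names are hypothetical and do not reflect the project's actual exercise definition format or gamification scheme.

exercise = {
    "id": "loops-01",
    "title": "Sum of a list",
    "statement": "Write a function that returns the sum of a list of integers.",
    "tests": [
        {"input": "[1, 2, 3]", "expected": "6"},
        {"input": "[]", "expected": "0"},
    ],
    "gamification": {
        "points": 10,                # reward for a correct solution
        "badge": "loop-novice",      # badge unlocked on first completion
        "unlocks": ["loops-02"],     # next exercise in the progression
    },
}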