Details

  • Name

    José Carlos Paiva
  • Position

    Research Assistant
  • Since

    01 August 2014
Publications

2024

Comparing semantic graph representations of source code: The case of automatic feedback on programming assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
COMPUTER SCIENCE AND INFORMATION SYSTEMS

Abstract
Static source code analysis techniques are gaining relevance in the automated assessment of programming assignments as they can provide less rigorous evaluation but more comprehensive and formative feedback. These techniques focus on source code aspects rather than requiring effective code execution. To this end, syntactic and semantic information encoded in textual data is typically represented internally as graphs, after parsing and other preprocessing stages. Static automated assessment techniques, therefore, draw inferences from intermediate representations to determine the correctness of a solution and derive feedback. Consequently, achieving the most effective semantic graph representation of source code for the specific task is critical, as it impacts the accuracy, outcome, and execution time of these techniques. This paper provides a thorough comparison of the most widespread semantic graph representations for the automated assessment of programming assignments, including usage examples, facets, and costs for each of these representations. A benchmark has been conducted to assess their cost using the Abstract Syntax Tree (AST) as a baseline. The results demonstrate that the Code Property Graph (CPG) is the most feature-rich representation, but also the largest and most space-consuming (about 33% larger than the AST).
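To make the baseline representation concrete, here is a minimal sketch (not the paper's tooling) using Python's built-in `ast` module to build and inspect the AST of a small program; the node count gives a rough proxy for the size dimension benchmarked above.

```python
# A minimal illustration: Python's built-in `ast` module parses source
# text into an Abstract Syntax Tree, the baseline representation used
# in the benchmark above.
import ast

source = """
def add(a, b):
    return a + b
"""

tree = ast.parse(source)          # build the AST
print(ast.dump(tree, indent=2))   # inspect its nodes

# Counting nodes gives a rough proxy for representation size,
# the cost dimension compared across graph types in the paper.
print(sum(1 for _ in ast.walk(tree)), "nodes")
```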

2023

PROGpedia: Collection of source-code submitted to introductory programming assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
DATA IN BRIEF

Abstract
Learning how to program is a difficult task. To acquire the required skills, novice programmers must solve a broad range of programming activities, always supported by timely, rich, and accurate feedback. Automated assessment tools play a major role in fulfilling these needs, being a common presence in introductory programming courses. As programming exercises are not easy to produce and those loaded into these tools must adhere to specific format requirements, teachers often opt to reuse them for several years. Therefore, most automated assessment tools, particularly Mooshak, store hundreds of submissions to the same programming exercises, as these need to be kept after automatic processing for possible subsequent manual revision. Our dataset consists of the submissions to 16 programming exercises in Mooshak proposed in multiple years within the 2003-2020 timespan to undergraduate Computer Science students at the Faculty of Sciences of the University of Porto. In particular, we extract their code property graphs and store them as CSV files. The analysis of this data can enable, for instance, the generation of more concise and personalized feedback based on similar accepted submissions from the past, the identification of different strategies to solve a problem, and the understanding of a student's thinking process, among many other findings. © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
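As an illustration of how such a dataset could be consumed, the following sketch loads one submission's graph from hypothetical `nodes.csv` and `edges.csv` files into a networkx graph; the file and column names are illustrative assumptions, not the dataset's documented schema.

```python
# Hypothetical sketch of loading one submission's code property graph
# from PROGpedia-style CSV files. Column names (`id`, `label`, `source`,
# `target`, `type`) are illustrative assumptions.
import csv
import networkx as nx

graph = nx.MultiDiGraph()

with open("nodes.csv", newline="") as f:
    for row in csv.DictReader(f):
        graph.add_node(row["id"], label=row["label"])

with open("edges.csv", newline="") as f:
    for row in csv.DictReader(f):
        graph.add_edge(row["source"], row["target"], type=row["type"])

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```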

2023

Bibliometric Analysis of Automated Assessment in Programming Education: A Deeper Insight into Feedback

Authors
Paiva, JC; Figueira, A; Leal, JP;

Publication
ELECTRONICS

Abstract
Learning to program requires diligent practice and creates room for discovery, trial and error, debugging, and concept mapping. Learners must walk this long road themselves, supported by appropriate and timely feedback. Providing such feedback in programming exercises is not a humanly feasible task. Therefore, the early and steadily growing interest of computer science educators in the automated assessment of programming exercises is not surprising. The automated assessment of programming assignments has been an active area of research for over half a century, and interest in it continues to grow as it adapts to new developments in computer science and the resulting changes in educational requirements. It is therefore of paramount importance to understand the work that has been performed, who has performed it, its evolution over time, the relationships between publications, its hot topics, and open problems, among others. This paper presents a bibliometric study of the field, with a particular focus on the issue of automatic feedback generation, using literature data from the Web of Science Core Collection. It includes a descriptive analysis using various bibliometric measures and data visualizations on authors, affiliations, citations, and topics. In addition, we performed a complementary analysis focusing only on the subset of publications on the specific topic of automatic feedback generation. The results are highlighted and discussed.

2023

FGPE+: The Mobile FGPE Environment and the Pareto-Optimized Gamified Programming Exercise Selection Model-An Empirical Evaluation

Authors
Maskeliunas, R; Damasevicius, R; Blazauskas, T; Swacha, J; Queiros, R; Paiva, JC;

Publication
COMPUTERS

Abstract
This paper is poised to inform educators, policy makers, and software developers about the untapped potential of PWAs in creating engaging, effective, and personalized learning experiences in the field of programming education. We aim to address a significant gap in the current understanding of the potential advantages and underutilisation of Progressive Web Applications (PWAs) within the education sector, specifically for programming education. Despite the evident lack of recognition of PWAs in this arena, we present an innovative approach through the Framework for Gamification in Programming Education (FGPE). This framework takes advantage of the ubiquity and ease of use of PWAs, integrating them with a Pareto-optimized gamified programming exercise selection model that ensures personalized adaptive learning experiences by dynamically adjusting the complexity, content, and feedback of gamified exercises in response to the learners' ongoing progress and performance. This study examines the mobile user experience of the FGPE PLE in different countries, namely Poland and Lithuania, providing novel insights into its applicability and efficiency. Our results demonstrate that combining advanced adaptive algorithms with the convenience of mobile technology has the potential to revolutionize programming education. The FGPE+ course group outperformed the Moodle group in terms of average perceived knowledge (M = 4.11, SD = 0.51).
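As a generic illustration of the multi-objective selection underlying a Pareto-optimized model (a sketch, not the FGPE+ implementation), the following code filters a candidate pool down to its Pareto front; the two objectives and exercise names are illustrative assumptions.

```python
# Generic sketch of Pareto-optimal filtering, the kind of multi-objective
# selection a Pareto-optimized exercise model performs. Objectives here
# (difficulty fit and expected engagement, both maximized) are assumptions.
from dataclasses import dataclass

@dataclass
class Exercise:
    name: str
    difficulty_fit: float   # how well difficulty matches the learner
    engagement: float       # expected gamification engagement

def dominates(a: Exercise, b: Exercise) -> bool:
    """a dominates b if it is no worse on every objective and better on one."""
    return (a.difficulty_fit >= b.difficulty_fit
            and a.engagement >= b.engagement
            and (a.difficulty_fit > b.difficulty_fit
                 or a.engagement > b.engagement))

def pareto_front(pool: list[Exercise]) -> list[Exercise]:
    # keep only exercises not dominated by any other candidate
    return [e for e in pool
            if not any(dominates(o, e) for o in pool if o is not e)]

candidates = [Exercise("loops-1", 0.9, 0.4),
              Exercise("arrays-2", 0.7, 0.8),
              Exercise("intro-0", 0.5, 0.5)]
print([e.name for e in pareto_front(candidates)])  # ['loops-1', 'arrays-2']
```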

2023

GATUGU: Six Perspectives of Evaluation of Gamified Systems

Authors
Swacha, J; Queiros, R; Paiva, JC;

Publication
INFORMATION

Abstract
As gamification spreads to new areas, new applications are being developed, and interest in evaluating gamified systems continues to grow. To date, however, no one has comprehensively approached this topic: multiple evaluation dimensions and measures have been proposed and applied without any effort to organize them into a full gamut of tools for the multi-dimensional evaluation of gamified systems. This paper addresses this gap by proposing GATUGU, a set of six perspectives for the evaluation of gamified systems: General effects of gamification, Area-specific effects of gamification, Technical quality of gamified systems, Use of gamified systems, Gamefulness of gamified systems, and User experience of gamified systems. For each perspective, GATUGU indicates the relevant dimensions of evaluation and, for each dimension, suggests one measure. GATUGU does not introduce any new measurement tools but merely recommends one of the available tools for each dimension, considering their popularity and ease of use. GATUGU can guide researchers in selecting gamified system evaluation perspectives and dimensions and in finding adequate measurement tools. Evaluation results published in conformance with GATUGU will be easier to compare and to use in various kinds of meta-analyses.