Details
Name
Álvaro Figueira
Cluster
Informatics
Position
Area Manager
Since
01 March 2009
Nationality
Portugal
Centre
Centro de Sistemas de Computação Avançada
Contacts
+351220402963
alvaro.figueira@inesctec.pt
2023
Authors
Paiva, JC; Leal, JP; Figueira, A
Publication
Data in Brief
Abstract
Learning how to program is a difficult task. To acquire the required skills, novice programmers must solve a broad range of programming activities, always supported with timely, rich, and accurate feedback. Automated assessment tools play a major role in fulfilling these needs, being a common presence in introductory programming courses. As programming exercises are not easy to produce and those loaded into these tools must adhere to specific format requirements, teachers often opt for reusing them for several years. Therefore, most automated assessment tools, particularly Mooshak, store hundreds of submissions to the same programming exercises, as these need to be kept after being automatically processed for possible subsequent manual revision. Our dataset consists of the submissions to 16 programming exercises in Mooshak proposed in multiple years within the 2003-2020 timespan to undergraduate Computer Science students at the Faculty of Sciences of the University of Porto. In particular, we extract their code property graphs and store them as CSV files. The analysis of this data can enable, for instance, the generation of more concise and personalized feedback based on similar accepted submissions in the past, the identification of different strategies to solve a problem, and the understanding of a student's thinking process, among many other findings.
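A minimal sketch of how such a CSV-exported code property graph might be explored. The file names (nodes.csv, edges.csv) and column layouts here are hypothetical placeholders, not the dataset's actual schema, which is defined in the published article:

import pandas as pd
import networkx as nx

# Hypothetical file names and columns; the published CSV layout may differ.
nodes = pd.read_csv("nodes.csv")  # e.g. columns: id, submission_id, label, code
edges = pd.read_csv("edges.csv")  # e.g. columns: source, target, submission_id, type

def build_cpg(submission_id):
    """Rebuild the code property graph of a single submission."""
    g = nx.MultiDiGraph()
    for _, row in nodes[nodes.submission_id == submission_id].iterrows():
        g.add_node(row.id, label=row.label, code=row.code)
    for _, row in edges[edges.submission_id == submission_id].iterrows():
        g.add_edge(row.source, row.target, type=row.type)
    return g

cpg = build_cpg(42)
print(cpg.number_of_nodes(), cpg.number_of_edges())

Once rebuilt as graphs, submissions can be compared structurally, e.g. to find past accepted solutions similar to a new one.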
2023
Authors
David, F; Guimaraes, N; Figueira, A
Publication
Procedia Computer Science
Abstract
2023
Authors
Paiva, JC; Figueira, A; Leal, JP
Publication
Electronics
Abstract
Learning to program requires diligent practice and creates room for discovery, trial and error, debugging, and concept mapping. Learners must walk this long road themselves, supported by appropriate and timely feedback. Providing such feedback in programming exercises is not a humanly feasible task. Therefore, the early and steadily growing interest of computer science educators in the automated assessment of programming exercises is not surprising. The automated assessment of programming assignments has been an active area of research for over half a century, and interest in it continues to grow as it adapts to new developments in computer science and the resulting changes in educational requirements. It is therefore of paramount importance to understand the work that has been performed, who has performed it, its evolution over time, the relationships between publications, its hot topics, and open problems, among others. This paper presents a bibliometric study of the field, with a particular focus on the issue of automatic feedback generation, using literature data from the Web of Science Core Collection. It includes a descriptive analysis using various bibliometric measures and data visualizations on authors, affiliations, citations, and topics. In addition, we performed a complementary analysis focusing only on the subset of publications on the specific topic of automatic feedback generation. The results are highlighted and discussed.
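As an illustration of the descriptive side of such a study, the sketch below assumes a tab-delimited Web of Science Core Collection export named savedrecs.txt with the standard field tags (PY for publication year, AU for authors, TC for times cited). It shows one possible analysis, not the paper's actual pipeline:

import pandas as pd

# Assumes a tab-delimited WoS Core Collection export; encoding may vary
# by export option (UTF-8 shown here).
recs = pd.read_csv("savedrecs.txt", sep="\t", index_col=False)

# Publications per year: the growth curve of the field.
per_year = recs.groupby("PY").size()
print(per_year)

# Most-cited records as a rough proxy for influential work.
top = recs.sort_values("TC", ascending=False).head(10)
print(top[["AU", "PY", "TC"]])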
2023
Authors
Figueira, A; Nascimento, L
Publication
Web Information Systems and Technologies
Abstract
2023
Authors
Espinosa, E; Figueira, A
Publication
Mathematics
Abstract
Class imbalance is a common issue when developing classification models. To tackle this problem, synthetic data have recently been used to enhance the minority class: artificially generated samples that bolster its representation. However, evaluating the suitability of such generated data is crucial to ensure their alignment with the original data distribution. Utility measures come into play here to quantify how similar the distribution of the generated data is to the original one. For tabular data, there are various evaluation methods that assess different characteristics of the generated data. In this study, we collected utility measures and categorized them based on the type of analysis they perform. We then applied these measures to synthetic data generated from two well-known datasets, Adult Income and Liar+. We used five well-known generative models, Borderline SMOTE, DataSynthesizer, CTGAN, CopulaGAN, and REaLTabFormer, to generate the synthetic data and evaluated its quality using the utility measures. The measures proved informative, indicating that if one synthetic dataset is superior to another in terms of utility measures, it will be more effective as an augmentation of the minority class in classification tasks.
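A minimal sketch of the kind of evaluation described, pairing one of the named generators (Borderline SMOTE, via the imbalanced-learn library) with a simple distribution-similarity check. The per-feature Kolmogorov-Smirnov test is just one stand-in for the utility measures the paper surveys, and a toy dataset replaces Adult Income and Liar+:

import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from imblearn.over_sampling import BorderlineSMOTE

# Toy imbalanced dataset (10% minority class) standing in for the
# real datasets used in the study.
X, y = make_classification(n_samples=2000, n_features=5,
                           weights=[0.9, 0.1], random_state=0)

# Oversample the minority class with Borderline SMOTE; imbalanced-learn
# appends the generated rows after the original samples.
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
synthetic = X_res[len(X):]   # newly generated minority rows
minority = X[y == 1]         # original minority rows

# Simple column-wise utility measure: a Kolmogorov-Smirnov test per
# feature, comparing synthetic and original minority distributions.
for j in range(X.shape[1]):
    stat, p = ks_2samp(minority[:, j], synthetic[:, j])
    print(f"feature {j}: KS={stat:.3f} (p={p:.3f})")

A lower KS statistic means the synthetic feature distribution is closer to the original, which is the intuition behind distribution-based utility measures.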
Supervised Theses
2022
Author
José Carlos Costa Paiva
Institution
UP-FCUP
2022
Author
Bruno Gonçalves Vaz
Institution
UP-FCUP
2022
Author
Pedro Miguel Tavares da Silva Gonçalves
Institution
UP-FCUP
2022
Author
Miguel Ângelo Pontes Rebelo
Institution
UP-FCUP
2022
Author
Nuno Ricardo Pinheiro da Silva Guimarães
Institution
UP-FCUP