About

I graduated in Mathematics Applied to Computer Science from the Faculty of Sciences, University of Porto, in 1995, and took my MSc in Foundations of Advanced Information Technology at Imperial College London in 1997. In 2004 I completed my PhD in Computer Science, in the area of concurrent and distributed programming.

I am currently a tenured Assistant Professor at the Faculty of Sciences, University of Porto. My research interests are in text and web mining, community detection, e-learning and web-based learning, and standards in education.

I am also a researcher at the CRACS research unit, where I have led international projects involving the University of Porto, the University of Texas at Austin, the University of Coimbra, and the University of Aveiro, on the automatic detection of relevance in social networks.

Publications

2023

PROGpedia: Collection of Source-Code Submitted to Introductory Programming Assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
DATA IN BRIEF

Abstract

2023

A WebApp for Reliability Detection in Social Media

Authors
David, F; Guimarães, N; Figueira, Á;

Publication
Procedia Computer Science

Abstract

2023

Bibliometric Analysis of Automated Assessment in Programming Education: A Deeper Insight into Feedback

Authors
Paiva, JC; Figueira, Á; Leal, JP;

Publication
Electronics

Abstract
Learning to program requires diligent practice and creates room for discovery, trial and error, debugging, and concept mapping. Learners must walk this long road themselves, supported by appropriate and timely feedback. Providing such feedback in programming exercises is not a humanly feasible task. Therefore, the early and steadily growing interest of computer science educators in the automated assessment of programming exercises is not surprising. The automated assessment of programming assignments has been an active area of research for over a century, and interest in it continues to grow as it adapts to new developments in computer science and the resulting changes in educational requirements. It is therefore of paramount importance to understand the work that has been performed, who has performed it, its evolution over time, the relationships between publications, its hot topics, and open problems, among others. This paper presents a bibliometric study of the field, with a particular focus on the issue of automatic feedback generation, using literature data from the Web of Science Core Collection. It includes a descriptive analysis using various bibliometric measures and data visualizations on authors, affiliations, citations, and topics. In addition, we performed a complementary analysis focusing only on the subset of publications on the specific topic of automatic feedback generation. The results are highlighted and discussed.

2022

Automated Assessment in Computer Science Education: A State-of-the-Art Review

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
ACM TRANSACTIONS ON COMPUTING EDUCATION

Abstract
Practical programming competencies are critical to the success in computer science education and go-to-market of fresh graduates. Acquiring the required level of skills is a long journey of discovery, trial and error, and optimization seeking through a broad range of programming activities that learners must perform themselves. It is not reasonable to consider that teachers could evaluate all attempts that the average learner should develop multiplied by the number of students enrolled in a course, much less in a timely, deeply, and fairly fashion. Unsurprisingly, exploring the formal structure of programs to automate the assessment of certain features has long been a hot topic among CS education practitioners. Assessing a program is considerably more complex than asserting its functional correctness, as the proliferation of tools and techniques in the literature over the past decades indicates. Program efficiency, behavior, readability, among many other features, assessed either statically or dynamically, are now also relevant for automatic evaluation. The outcome of an evaluation evolved from the primordial boolean values to information about errors and tips on how to advance, possibly taking into account similar solutions. This work surveys the state-of-the-art in the automated assessment of CS assignments, focusing on the supported types of exercises, security measures adopted, testing techniques used, type of feedback produced, and the information they offer the teacher to understand and optimize learning. A new era of automated assessment, capitalizing on static analysis techniques and containerization, has been identified. Furthermore, this review presents several other findings from the conducted review, discusses the current challenges of the field, and proposes some future research directions.

2022

What Makes a Movie Get Success? A Visual Analytics Approach

Authors
Vaz, B; Barros, MD; Lavoura, MJ; Figueira, A;

Publication
MARKETING AND SMART TECHNOLOGIES, VOL 1

Abstract

Supervised Theses

2022

Predictive Geovisual Analytics, using data streams fusion, for Risk Monitoring and Early Warning Systems optimization

Author
Pedro Miguel Tavares da Silva Gonçalves

Institution
UP-FCUP

2022

Using GANs to create synthetic datasets for fake news detection models

Author
Bruno Gonçalves Vaz

Institution
UP-FCUP

2022

Towards realistic scenarios concerning the identification of unreliable information in social networks

Author
Nuno Ricardo Pinheiro da Silva Guimarães

Institution
UP-FCUP

2022

Recommendation System for the News Market

Author
Miguel Ângelo Pontes Rebelo

Institution
UP-FCUP

2022

Reasoning on Semantic Representations of Source Code to Support Programming Education

Author
José Carlos Costa Paiva

Institution
UP-FCUP