Publications

Publications by CRACS

2024

GERF - Gamified Educational Virtual Escape Room Framework for Innovative Micro-Learning and Adaptive Learning Experiences

Authors
Queirós, R;

Publication
Communications in Computer and Information Science

Abstract
This paper introduces GERF, a Gamified Educational Virtual Escape Room Framework designed to enhance micro-learning and adaptive learning experiences in educational settings. The framework incorporates a user taxonomy based on the User Types Hexad, addressing the preferences and motivations of different learner profiles. GERF focuses on two key facets: interoperability and analytics. To ensure seamless integration of Escape Room (ER) platforms with Learning Management Systems (LMS), the Learning Tools Interoperability (LTI) specification is used, enabling smooth and efficient communication between ERs and LMS platforms. Additionally, GERF uses the xAPI specification to capture and transmit experiential data in the form of xAPI statements, which are then sent to a Learning Record Store (LRS). By leveraging these learning analytics, educators gain valuable insights into students' interactions within the ER, facilitating the adaptation of learning content to individual learning needs. Ultimately, GERF empowers educators to create personalized learning experiences within the ER environment, fostering student engagement and improving learning outcomes. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
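As a rough illustration of the xAPI flow the abstract describes, the sketch below builds a minimal statement for a completed escape-room puzzle and posts it to an LRS. The statement structure (actor, verb, object) and the version header follow the xAPI specification; the LRS URL, credentials, and the verb/activity IRIs are placeholders, not values taken from GERF.

```python
# Minimal sketch of emitting an xAPI statement from an escape-room puzzle
# to a Learning Record Store (LRS). Endpoint, credentials, and IRIs below
# are hypothetical, not prescribed by GERF.
import requests

LRS_URL = "https://lrs.example.org/xapi/statements"  # hypothetical endpoint
AUTH = ("lrs_user", "lrs_password")                  # hypothetical credentials

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Student 42",
        "mbox": "mailto:student42@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://er.example.org/rooms/algebra-escape/puzzle-3",
        "definition": {"name": {"en-US": "Puzzle 3: modular arithmetic"}},
    },
    "result": {"success": True, "duration": "PT4M20S"},  # ISO 8601 duration
}

response = requests.post(
    LRS_URL,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()  # the LRS replies with the stored statement ID(s)
```

Once statements like this accumulate in the LRS, they can be queried to drive the adaptive-content decisions the paper describes.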

2023

PROGpedia: Collection of source-code submitted to introductory programming assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
DATA IN BRIEF

Abstract
Learning how to program is a difficult task. To acquire the required skills, novice programmers must solve a broad range of programming activities, always supported by timely, rich, and accurate feedback. Automated assessment tools play a major role in fulfilling these needs, being a common presence in introductory programming courses. As programming exercises are not easy to produce and those loaded into these tools must adhere to specific format requirements, teachers often opt to reuse them for several years. Therefore, most automated assessment tools, particularly Mooshak, store hundreds of submissions to the same programming exercises, as these need to be kept after automatic processing for possible subsequent manual revision. Our dataset consists of the submissions to 16 programming exercises in Mooshak proposed in multiple years within the 2003-2020 timespan to undergraduate Computer Science students at the Faculty of Sciences of the University of Porto. In particular, we extract their code property graphs and store them as CSV files. The analysis of this data can enable, for instance, the generation of more concise and personalized feedback based on similar accepted submissions in the past, the identification of different strategies to solve a problem, and the understanding of a student's thinking process, among many other findings. © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
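To suggest how the CSV-encoded code property graphs might be consumed, here is a small sketch that loads one submission into a graph structure. The file names and column names are guesses for illustration, not the dataset's documented schema.

```python
# Illustrative loader for one submission's code property graph, assuming it
# is split across a nodes CSV and an edges CSV; file and column names below
# are assumptions, not the PROGpedia schema.
import pandas as pd
import networkx as nx

nodes = pd.read_csv("submission_001/nodes.csv")   # assumed columns: id, type, code
edges = pd.read_csv("submission_001/edges.csv")   # assumed columns: source, target, label

graph = nx.MultiDiGraph()
for row in nodes.itertuples(index=False):
    graph.add_node(row.id, type=row.type, code=row.code)
for row in edges.itertuples(index=False):
    graph.add_edge(row.source, row.target, label=row.label)

# A crude similarity proxy between two submissions: their node-type counts.
# Comparing profiles is one possible starting point for clustering the
# solution strategies the abstract mentions.
def node_type_profile(g: nx.MultiDiGraph) -> dict:
    profile: dict = {}
    for _, data in g.nodes(data=True):
        profile[data["type"]] = profile.get(data["type"], 0) + 1
    return profile
```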

2023

A WebApp for Reliability Detection in Social Media

Authors
David, F; Guimarães, N; Figueira, A;

Publication
Procedia Computer Science

Abstract

2023

Bibliometric Analysis of Automated Assessment in Programming Education: A Deeper Insight into Feedback

Authors
Paiva, JC; Figueira, A; Leal, JP;

Publication
ELECTRONICS

Abstract
Learning to program requires diligent practice and creates room for discovery, trial and error, debugging, and concept mapping. Learners must walk this long road themselves, supported by appropriate and timely feedback. Providing such feedback in programming exercises is not a humanly feasible task. Therefore, the early and steadily growing interest of computer science educators in the automated assessment of programming exercises is not surprising. The automated assessment of programming assignments has been an active area of research for over half a century, and interest in it continues to grow as it adapts to new developments in computer science and the resulting changes in educational requirements. It is therefore of paramount importance to understand the work that has been performed, who has performed it, its evolution over time, the relationships between publications, its hot topics, and open problems, among others. This paper presents a bibliometric study of the field, with a particular focus on the issue of automatic feedback generation, using literature data from the Web of Science Core Collection. It includes a descriptive analysis using various bibliometric measures and data visualizations on authors, affiliations, citations, and topics. In addition, we performed a complementary analysis focusing only on the subset of publications on the specific topic of automatic feedback generation. The results are highlighted and discussed.
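For a flavor of the kind of descriptive pass such a study performs, the sketch below tabulates publications per year and top authors from a Web of Science export. It assumes a tab-delimited export file with the standard WoS field tags PY (year) and AU (authors); the file name and encoding are assumptions, not details from the paper.

```python
# Sketch of a descriptive bibliometric pass over a Web of Science export.
# Assumes a tab-delimited "savedrecs.txt" with standard WoS field tags
# PY (publication year) and AU (authors); file name is illustrative.
import pandas as pd

records = pd.read_csv("savedrecs.txt", sep="\t", index_col=False)

# Publications per year.
per_year = records["PY"].value_counts().sort_index()
print(per_year)

# Most frequent authors (WoS separates multiple authors with "; ").
authors = records["AU"].dropna().str.split("; ").explode()
print(authors.value_counts().head(10))
```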

2023

On the Quality of Synthetic Generated Tabular Data

Authors
Espinosa, E; Figueira, A;

Publication
MATHEMATICS

Abstract
Class imbalance is a common issue when developing classification models. To tackle this problem, synthetic data have recently been used to bolster the representation of the minority class with artificially generated samples. However, evaluating the suitability of such generated data is crucial to ensure their alignment with the original data distribution. Utility measures come into play here to quantify how similar the distribution of the generated data is to the original one. For tabular data, there are various evaluation methods that assess different characteristics of the generated data. In this study, we collected utility measures and categorized them based on the type of analysis they perform. We then applied these measures to synthetic data generated from two well-known datasets, Adult Income and Liar+. We used five well-known generative models, Borderline SMOTE, DataSynthesizer, CTGAN, CopulaGAN, and REaLTabFormer, to generate the synthetic data and evaluated its quality using the utility measures. The measurements proved to be informative, indicating that if one synthetic dataset is superior to another in terms of utility measures, it will be more effective as an augmentation of the minority class in classification tasks.
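As a concrete instance of one utility check from the family the paper surveys, the sketch below generates minority samples with Borderline SMOTE and compares per-feature marginal distributions of real versus synthetic data with a Kolmogorov-Smirnov test. The toy dataset stands in for Adult Income / Liar+, and this is not the paper's exact evaluation protocol.

```python
# Sketch: generate synthetic minority samples with Borderline SMOTE and
# compare real vs. synthetic marginals with a KS test, one simple utility
# measure. Toy data stands in for the paper's datasets.
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification

# Imbalanced toy data (90% majority, 10% minority).
X, y = make_classification(
    n_samples=2000, n_features=8, weights=[0.9, 0.1], random_state=0
)

X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
real_minority = X[y == 1]
synthetic = X_res[len(X):]  # fit_resample appends the new samples at the end

# Low KS statistics (high p-values) suggest the synthetic marginals track
# the real minority-class distribution feature by feature.
for j in range(X.shape[1]):
    stat, p_value = ks_2samp(real_minority[:, j], synthetic[:, j])
    print(f"feature {j}: KS={stat:.3f} p={p_value:.3f}")
```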

2023

Jay: A software framework for prototyping and evaluating offloading applications in hybrid edge clouds

Authors
Silva, J; Marques, ERB; Lopes, LMB; Silva, FMA;

Publication
SOFTWARE-PRACTICE & EXPERIENCE

Abstract
We present Jay, a software framework for offloading applications in hybrid edge clouds. Jay provides an API, services, and tools that enable mobile application developers to implement, instrument, and evaluate offloading applications using configurable cloud topologies, offloading strategies, and job types. We start by presenting Jay's job model and the concrete architecture of the framework. We then present the programming API with several examples of customization, followed by a description of the internal implementation of Jay instances and their components. Finally, we describe the Jay Workbench, a tool for setting up, executing, and reproducing experiments over networks of hosts with different resource capabilities organized in specific topologies. The complete source code for the framework and workbench is provided in a GitHub repository.
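To convey the decision at the heart of a configurable offloading strategy, here is an entirely hypothetical sketch: pick the host with the lowest estimated job completion time, accounting for transfer, queueing, and execution. None of these names come from Jay's actual API, and Jay itself is not written in Python; this only illustrates the general idea.

```python
# Hypothetical offloading decision: choose the host minimizing estimated
# completion time = transfer + queue wait + execution. Not Jay's real API.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    queue_seconds: float   # estimated wait in the host's job queue
    compute_rate: float    # job work units processed per second
    bandwidth: float       # bytes per second to this host

def estimated_completion(host: Host, work_units: float, payload_bytes: float) -> float:
    transfer = payload_bytes / host.bandwidth
    execution = work_units / host.compute_rate
    return transfer + host.queue_seconds + execution

def choose_host(hosts: list[Host], work_units: float, payload_bytes: float) -> Host:
    return min(hosts, key=lambda h: estimated_completion(h, work_units, payload_bytes))

hosts = [
    Host("local", queue_seconds=0.0, compute_rate=1.0, bandwidth=float("inf")),
    Host("edge-1", queue_seconds=0.5, compute_rate=4.0, bandwidth=5e6),
    Host("cloud", queue_seconds=2.0, compute_rate=20.0, bandwidth=1e6),
]
print(choose_host(hosts, work_units=8.0, payload_bytes=2e6).name)
```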
