Details

  • Name

    Jácome Costa Cunha
  • Position

    Researcher
  • Since

    01 November 2011
Publications

2026

A framework for supporting the reproducibility of computational experiments in multiple scientific domains

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Future Generation Computer Systems

Abstract
In recent years, the research community, but also the general public, has raised serious questions about the reproducibility and replicability of scientific work. Since many studies include some kind of computational work, these issues are also a technological challenge, not only in computer science, but also in most research domains. Computational replicability and reproducibility are not easy to achieve due to the variety of computational environments that can be used. Indeed, it is challenging to recreate the same environment via the same frameworks, code, programming languages, dependencies, and so on. We propose a framework, known as SciRep, that supports the configuration, execution, and packaging of computational experiments by defining their code, data, programming languages, dependencies, databases, and commands to be executed. After the initial configuration, the experiments can be executed any number of times, always producing exactly the same results. Our approach allows the creation of a reproducibility package for experiments from multiple scientific fields, from medicine to computer science, which can be re-executed on any computer. The produced package acts as a capsule, holding absolutely everything necessary to re-execute the experiment. To evaluate our framework, we compare it with three state-of-the-art tools and use it to reproduce 18 experiments extracted from published scientific articles. With our approach, we were able to execute 16 (89%) of those experiments, while the other tools reached only 61%, thus showing that our approach is effective. Moreover, all the experiments that were executed produced the results presented in the original publication. Thus, SciRep was able to reproduce 100% of the experiments it could run.

2025

Modelling sustainability in cyber-physical systems: A systematic mapping study

Authors
Barisic, A; Cunha, J; Ruchkin, I; Moreira, A; Araújo, J; Challenger, M; Savic, D; Amaral, V;

Publication
Sustainable Computing: Informatics and Systems

Abstract
Supporting sustainability through modelling and analysis has become an active area of research in Software Engineering. Therefore, it is important and timely to survey the current state of the art in sustainability in Cyber-Physical Systems (CPS), one of the most rapidly evolving classes of complex software systems. This work presents the findings of a Systematic Mapping Study (SMS) that aims to identify key primary studies reporting on CPS modelling approaches that address sustainability over the last 10 years. Our literature search retrieved 2209 papers, of which 104 primary studies were deemed relevant for a detailed characterisation. These studies were analysed based on nine research questions designed to extract information on sustainability attributes, methods, models/meta-models, metrics, processes, and tools used to improve the sustainability of CPS. These questions also aimed to gather data on domain-specific modelling approaches and relevant application domains. The final results report findings for each of our questions, highlight interesting correlations among them, and identify literature gaps worth investigating in the near future.

2025

Let's Talk About It: Making Scientific Computational Reproducibility Easy

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
CoRR

Abstract

2025

CompRep: A Dataset For Computational Reproducibility

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Proceedings of the 3rd ACM Conference on Reproducibility and Replicability (ACM REP 2025)

Abstract
Reproducibility in computational science is increasingly dependent on the ability to faithfully re-execute experiments involving code, data, and software environments. However, assessing the effectiveness of reproducibility tools is difficult due to the lack of standardized benchmarks. To address this, we collected 38 computational experiments from diverse scientific domains and attempted to reproduce each using 8 different reproducibility tools. From this initial pool, we identified 18 experiments that could be successfully reproduced using at least one tool. These experiments form our curated benchmark dataset, which we release along with reproducibility packages to support ongoing evaluation efforts. This article introduces the curated dataset, incorporating details about software dependencies, execution steps, and configurations necessary for accurate reproduction. The dataset is structured to reflect diverse computational requirements and methodologies, ranging from simple scripts to complex, multi-language workflows, ensuring it represents the wide range of challenges researchers face in reproducing computational studies. It provides a universal benchmark by establishing a standardized dataset for objectively evaluating and comparing the effectiveness of reproducibility tools. Each experiment included in the dataset is carefully documented to ensure ease of use. We added clear instructions following a standard, so each experiment has the same kind of instructions, making it easier for researchers to run each of them with their own reproducibility tool. The utility of the dataset is demonstrated through extensive evaluations using multiple reproducibility tools.

2025

Mind the gap: The missing features of the tools to support user studies in software engineering

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
Journal of Computer Languages

Abstract
User studies are paramount for advancing research in software engineering, particularly when evaluating tools and techniques involving programmers. However, researchers face several barriers when performing them despite the existence of supporting tools. We base our study on a set of tools and researcher-reported barriers identified in prior work on user studies in software engineering. In this work, we study how existing tools and their features cope with previously identified barriers. Moreover, we propose new features for the barriers that lack support. We validated our proposal with 102 researchers, achieving statistically significant positive support for all but one feature. We study the current gap between tools and barriers, using features as the bridge. We show there is a significant lack of support for several barriers, as some have no single tool to support them.

Supervised
Theses

2023

Deploy-Oriented Specification of Cloud Native Applications

Author
André Daniel Alves Gomes

Institution
UP-FEUP

2023

Visually-assisted Decomposition of Monoliths to Microservices

Author
Breno da Fonseca Salles

Institution
UP-FEUP

2023

Defining Metrics for the Identification of Microservices in Code Repositories

Author
Domingos Francisco Panta Junior

Institution
UP-FEUP

2023

Designing, Implementing, and Deploying a Better Customer-oriented, Secure REST API for Invoicing Software

Author
Miguel Rodrigues Gomes

Institution
UP-FEUP

2022

Characterizing Data Scientists in the Real World

Author
Paula Sofia da Cunha Pereira

Institution
UP-FEUP