About

João Pascoal Faria holds a PhD in Electrical and Computer Engineering (1999) from the Faculty of Engineering of the University of Porto, where he is currently Associate Professor in the Department of Informatics Engineering and Director of the Integrated Master in Informatics and Computing Engineering. He is a member of the Software Engineering Research Group (softeng.fe.up.pt) and a researcher at INESC TEC, where he coordinates the Software Engineering area. He represents FEUP and INESC TEC on the Technical Committee for Health Information Systems (CT 199), and FEUP, as President of the Sectoral Committee for the Quality of Information and Communication Technologies (CS/03), within the Portuguese Institute for Quality (IPQ). He has previously worked with several software companies (Novabase Saúde, Sidereus, Medidata) and co-founded two others (QualiSoft and Strongstep). He has more than 25 years of experience in teaching, research, development and consulting in several areas of software engineering. He is the main author of a rapid application development tool (SAGA), based on domain-specific languages, with more than 25 years of market presence and evolution (1989-present). He is currently involved in research projects, thesis supervision and consulting activities in the areas of model-based software testing, software process improvement and model-driven development.

Topics of interest
Details

  • Name

    João Pascoal Faria
  • Position

    Coordinating Researcher
  • Since

    14 October 1985
Publications

2025

LLM Prompt Engineering for Automated White-Box Integration Test Generation in REST APIs

Authors
Rincon, AM; Vincenzi, AMR; Faria, JP;

Publication
2025 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW

Abstract
This study explores prompt engineering for automated white-box integration testing of RESTful APIs using Large Language Models (LLMs). Four versions of prompts were designed and tested across three OpenAI models (GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o) to assess their impact on code coverage, token consumption, execution time, and financial cost. The results indicate that different prompt versions, especially with more advanced models, achieved up to 90% coverage, although at higher costs. Additionally, combining test sets from different models increased coverage, reaching 96% in some cases. We also compared the results with EvoMaster, a specialized tool for generating tests for REST APIs, where LLM-generated tests achieved comparable or higher coverage in the benchmark projects. Despite higher execution costs, LLMs demonstrated superior adaptability and flexibility in test generation.
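
As an illustration of the kind of prompting pipeline described above, the sketch below builds a white-box prompt from an endpoint's source code and asks an OpenAI chat model for integration tests. It is a minimal sketch under assumptions: the prompt wording, the generate_integration_tests helper and the model name are illustrative, not the prompts or tooling used in the study.

    # Minimal sketch (not the study's actual prompts or tooling): ask an OpenAI
    # chat model for white-box integration tests given an endpoint's source code.
    # Prompt wording, helper name and model choice are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_integration_tests(endpoint_source: str, model: str = "gpt-4o") -> str:
        """Request pytest-style integration tests that cover all branches."""
        prompt = (
            "You are given the source code of a REST API endpoint.\n"
            "Write pytest integration tests that exercise every branch,\n"
            "including error handling and boundary values.\n\n"
            f"Endpoint source code:\n{endpoint_source}"
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return response.choices[0].message.content

Combining test sets from different models, as reported in the abstract, amounts to running this step with several model values and merging the resulting test files.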

2025

Automated Social Media Feedback Analysis for Software Requirements Elicitation: A Case Study in the Streaming Industry

Authors
Silva, M; Faria, JP;

Publication
Proceedings of the 20th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2025, Porto, Portugal, April 4-6, 2025.

Abstract

2025

Automatic Generation of Loop Invariants in Dafny with Large Language Models

Authors
Faria, JP; Trigo, E; Abreu, R;

Publication
FUNDAMENTALS OF SOFTWARE ENGINEERING, FSEN 2025

Abstract
Recent verification tools aim to make formal verification more accessible for software engineers by automating most of the verification process. However, the manual work and expertise required to write verification helper code, such as loop invariants and auxiliary lemmas and assertions, remains a barrier. This paper explores the use of Large Language Models (LLMs) to automate the generation of loop invariants for programs in Dafny. We tested the approach on a curated dataset of 100 programs in Dafny involving arrays, strings, and numeric types. Using a multimodel approach that combines GPT-4o and Claude 3.5 Sonnet, correct loop invariants (passing the Dafny verifier) were generated at the first attempt for 92% of the programs, and in at most five attempts for 95% of the programs. Additionally, we developed an extension to the Dafny plugin for Visual Studio Code to incorporate automatic loop invariant generation into the IDE. Our work stands out from related approaches by handling a broader class of problems and offering IDE integration.
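
The generate-and-verify loop described in the abstract can be pictured as in the following sketch, which repeatedly asks a model for invariants, runs the Dafny verifier, and feeds the verifier's output back into the next attempt. This is an illustrative outline, not the authors' implementation; the prompt, the model name and the `dafny verify` invocation are assumptions and may differ across Dafny versions.

    # Illustrative generate-and-verify loop for Dafny loop invariants (a sketch of
    # the idea in the abstract, not the authors' tool). Prompt, model name and the
    # `dafny verify` command are assumptions.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def try_generate_invariants(dafny_source: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            prompt = (
                "Add the loop invariants needed for the Dafny verifier to accept "
                "this program. Return the complete annotated program only.\n\n"
                + dafny_source + feedback
            )
            candidate = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
                temperature=0,
            ).choices[0].message.content
            with open("candidate.dfy", "w") as f:
                f.write(candidate)
            result = subprocess.run(["dafny", "verify", "candidate.dfy"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return candidate  # the verifier accepted the annotated program
            feedback = "\n\nThe Dafny verifier reported:\n" + result.stdout
        return None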

2025

Acceptance Test Generation with Large Language Models: An Industrial Case Study

Authors
Ferreira, M; Viegas, L; Faria, JP; Lima, B;

Publication
2025 IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATION OF SOFTWARE TEST, AST

Abstract
Large language model (LLM)-powered assistants are increasingly used for generating program code and unit tests, but their application in acceptance testing remains underexplored. To help address this gap, this paper explores the use of LLMs for generating executable acceptance tests for web applications through a two-step process: (i) generating acceptance test scenarios in natural language (in Gherkin) from user stories, and (ii) converting these scenarios into executable test scripts (in Cypress), knowing the HTML code of the pages under test. This two-step approach supports acceptance test-driven development, enhances tester control, and improves test quality. The two steps were implemented in the AutoUAT and Test Flow tools, respectively, powered by GPT-4 Turbo, and integrated into a partner company's workflow and evaluated on real-world projects. The users found the acceptance test scenarios generated by AutoUAT helpful 95% of the time, even revealing previously overlooked cases. Regarding Test Flow, 92% of the acceptance test cases generated by Test Flow were considered helpful: 60% were usable as generated, 8% required minor fixes, and 24% needed to be regenerated with additional inputs; the remaining 8% were discarded due to major issues. These results suggest that LLMs can, in fact, help improve the acceptance test process, with appropriate tooling and supervision.
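
The two-step process described above can be sketched as two chained prompts, one per step. The sketch below is only an outline of that idea under assumptions (model name, prompt wording, helper names); it is not the implementation of AutoUAT or Test Flow.

    # Outline of the two-step process (user story -> Gherkin -> Cypress) described
    # in the abstract. Not the AutoUAT/Test Flow implementation; prompts, helper
    # names and model choice are assumptions.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        return client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        ).choices[0].message.content

    def user_story_to_gherkin(user_story: str) -> str:
        # Step 1: acceptance test scenarios in natural language (Gherkin).
        return ask("Write Gherkin acceptance test scenarios for this user story, "
                   "covering the main flow and relevant alternative flows:\n\n" + user_story)

    def gherkin_to_cypress(gherkin: str, page_html: str) -> str:
        # Step 2: executable Cypress script, choosing selectors from the page's HTML.
        return ask("Convert these Gherkin scenarios into a Cypress test script. "
                   "Use only selectors present in the given HTML.\n\n"
                   f"Scenarios:\n{gherkin}\n\nPage HTML:\n{page_html}")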

2024

Report from the 14th International Workshop on Automating Test Case Design, Selection, and Evaluation (A-TEST 2023)

Authors
Faria, JP; Verbeek, F; Fasolino, AR;

Publication
ACM SIGSOFT Softw. Eng. Notes

Abstract

Supervised theses

2023

Assessing Accuracy of Low Cost Sensors in Sign Language Recognition

Author
Daniel Lima Fernandes Vieira

Institution
UP-FEUP

2023

Adoption of a BDD Framework and its Guidelines

Author
João Renato da Costa Pinto

Institution
UP-FEUP

2023

Task Prediction and Planning Tool for Complex Engineering Tasks

Author
Afonso Maria Rebordão Caiado de Sousa

Institution
UP-FEUP

2022

Low-Code Data Model Designer

Author
Ana Isabel Ferreira Maia

Institution
UP-FEUP

2022

Integration of Fraud Detection Services in Payment Processing Systems

Author
Ana Margarida Ruivo Loureiro

Institution
UP-FEUP