
About

João Pascoal Faria received a PhD in Electrical and Computer Engineering from the Faculty of Engineering of the University of Porto (FEUP) in 1999, where he is currently Associate Professor at the Department of Informatics Engineering and Director of the Integrated Master in Informatics and Computing Engineering (MIEIC). He is a member of the Software Engineering Research Group (softeng.fe.up.pt) and a researcher at INESC TEC, where he coordinates the Software Engineering area. He represents FEUP and INESC TEC in the Technical Commission for Health Informatics (CT 199) and FEUP as President of the Sectorial Commission for the Quality of Information and Communications Technology (CS/03), within the scope of the Portuguese Quality Institute (IPQ). In the past, he worked with several software companies (Novabase Saúde, Sidereus, Medidata) and co-founded two others (Qualisoft and Strongstep). He has more than 25 years of experience in education, research, development and consultancy in several software engineering areas. He is the main author of SAGA, a rapid application development tool based on domain-specific languages, with more than 25 years of market presence and evolution (1989-present). He is currently involved in research projects, supervisions and consulting activities in the areas of model-based testing, software process improvement and model-driven development.


Details

  • Name

    João Pascoal Faria
  • Cluster

    Computer Science
  • Role

    Senior Researcher
  • Since

    14th October 1985
Publications

2021

An analysis of Monte Carlo simulations for forecasting software projects

Authors
Miranda, P; Faria, JP; Correia, FF; Fares, A; Graça, R; Moreira, JM;

Publication
SAC '21: The 36th ACM/SIGAPP Symposium on Applied Computing, Virtual Event, Republic of Korea, March 22-26, 2021

Abstract
Forecasts of the effort or delivery date can play an important role in managing software projects, but the estimates provided by development teams are often inaccurate and time-consuming to produce. This is not surprising given the uncertainty that underlies this activity. This work studies the use of Monte Carlo simulations for generating forecasts based on project historical data. We designed and ran experiments comparing these forecasts against what happened in practice and against estimates provided by developers, when available. Comparisons were made based on the mean magnitude of relative error (MMRE). We also analyzed how the forecasting accuracy varies with the amount of work to be forecasted and the amount of historical data used. To minimize the requirements on input data, delivery date forecasts for a set of user stories were computed based on the takt time of past stories (time elapsed between the completion of consecutive stories); effort forecasts were computed based on full-time equivalent (FTE) hours allocated to the implementation of past stories. The MMRE of delivery date forecasting was 32% in a set of 10 runs (for different projects) of Monte Carlo simulation based on takt time. The MMRE of effort forecasting was 20% in a set of 5 runs of Monte Carlo simulation based on FTE allocation, much smaller than the MMRE of 134% of developers' estimates. A better forecasting accuracy was obtained when the number of historical data points was 20 or higher. These results suggest that Monte Carlo simulations may be used in practice for delivery date and effort forecasting in agile projects, after a few initial sprints. © 2021 ACM.
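The takt-time approach described in the abstract can be sketched in a few lines: resample historical takt times with replacement to simulate many possible completion dates for a backlog of stories, then read a forecast off the resulting distribution. This is a minimal illustrative sketch, not the authors' implementation; the function name, the 85th-percentile choice, and the sample data are assumptions.

```python
import random

def forecast_delivery(takt_times, n_stories, n_runs=10000, percentile=0.85):
    """Monte Carlo forecast of the time (in days) to complete n_stories.

    Each run draws n_stories takt times (days between consecutive story
    completions) from the historical sample, with replacement, and sums them.
    Returns the duration not exceeded in `percentile` of the simulated runs.
    """
    totals = sorted(
        sum(random.choices(takt_times, k=n_stories)) for _ in range(n_runs)
    )
    return totals[int(percentile * (n_runs - 1))]

# Hypothetical historical takt times from 20 past stories (days);
# the paper reports better accuracy with 20 or more data points.
history = [1, 2, 2, 3, 1, 4, 2, 2, 3, 1, 2, 5, 2, 1, 3, 2, 2, 4, 1, 2]
print(forecast_delivery(history, n_stories=15))
```

Effort forecasting works the same way, with FTE hours per story resampled in place of takt times.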

2021

An analysis of the state of the art of machine learning for risk assessment in software projects

Authors
Sousa A.; Faria J.P.; Mendes-Moreira J.;

Publication
Proceedings of the International Conference on Software Engineering and Knowledge Engineering, SEKE

Abstract
Risk management is one of the ten knowledge areas discussed in the Project Management Body of Knowledge (PMBOK), a guide intended to increase the chances of project success. Research on risk management in software projects has grown consistently in recent years, particularly on applying machine learning techniques to identify a project's risk levels or risk factors before development begins, with the intent of improving the likelihood of success of software projects. This paper provides an overview of concepts related to risk and risk management in software projects, including traditional techniques used to identify and control risks, as well as machine learning techniques and methods that have been applied to provide better estimates and classifications of the risk levels and risk factors that can be encountered during the development of a software project. The paper also presents an analysis of machine-learning-oriented risk management studies and experiments found in the literature, identifying the types of inputs and outputs, as well as the algorithms frequently used in this research area.

2020

Visual Self-healing Modelling for Reliable Internet-of-Things Systems

Authors
Dias, JP; Lima, B; Faria, JP; Restivo, A; Ferreira, HS;

Publication
Lecture Notes in Computer Science - Computational Science – ICCS 2020

Abstract

2020

Local Observability and Controllability Analysis and Enforcement in Distributed Testing with Time Constraints

Authors
Lima, B; Faria, JP; Hierons, R;

Publication
IEEE Access

Abstract

2020

The ProcessPAIR Method for Automated Software Process Performance Analysis

Authors
Raza, M; Faria, JP;

Publication
IEEE ACCESS

Abstract
High-maturity software development processes and development environments with automated data collection can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes, and devise improvement actions. However, conducting the analysis manually is challenging because of the potentially large amount of data to analyze, the effort and expertise required, and the lack of benchmarks for comparison. In this article, we present ProcessPAIR, a novel method with tool support designed to help developers analyze their performance data with higher quality and less effort. Based on performance models structured manually by process experts and calibrated automatically from the performance data of many process users, it automatically identifies and ranks performance problems and potential root causes of individual subjects, so that subsequent manual analysis for the identification of deeper causes and improvement actions can be appropriately focused. We also show how ProcessPAIR was successfully instantiated and used in software engineering education and training, helping students analyze their performance data with higher satisfaction (by 25%), better quality of analysis outcomes (by 7%), and lower effort (by 4%), as compared to a traditional approach (with reduced tool support).
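The core idea in the abstract, ranking a subject's performance indicators against benchmarks calibrated from many process users so that manual analysis can focus on the worst ones, can be sketched with a simple empirical-percentile heuristic. This is an illustrative sketch only; the indicator names, sample data, and the "higher is better" simplification are assumptions, not ProcessPAIR's actual performance models.

```python
def rank_indicators(subject, benchmark):
    """Rank a subject's indicators by empirical percentile within the
    benchmark distribution of many process users, worst (lowest) first.

    For simplicity, assume higher values are better for every indicator;
    a real model would encode each indicator's direction and thresholds.
    """
    ranked = []
    for name, value in subject.items():
        values = benchmark[name]
        # Fraction of benchmark users whose value is at or below the subject's.
        pct = sum(v <= value for v in values) / len(values)
        ranked.append((name, pct))
    return sorted(ranked, key=lambda item: item[1])

# Hypothetical benchmark data collected from 7 process users.
benchmark = {
    "process_yield": [55, 60, 65, 70, 75, 80, 90],   # % defects removed early
    "productivity":  [10, 15, 20, 25, 30, 35, 40],   # LOC per hour
}
subject = {"process_yield": 62, "productivity": 12}
print(rank_indicators(subject, benchmark))
```

Indicators at the bottom of the ranking are the candidates for deeper root-cause analysis and improvement actions.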

Supervised
Theses

2021

Mock Testing Framework for a Fire Detection System

Author
Guilherme de Castro Oliveira

Institution
UP-FEUP

2021

Increasing the Dependability of Internet-of-Things Systems in the context of End-User Development Environments

Author
João Pedro Matos Teixeira Dias

Institution
UP-FEUP

2021

Observability and Controllability in Scenario-based Integration Testing of Time-Constrained Distributed Systems

Author
Bruno Miguel Carvalhido Lima

Institution
UP-FEUP

2021

Assessing Risks in Software Projects Through Machine Learning Approaches

Author
André Oliveira Sousa

Institution
UP-FEUP

2021

Tool for Incremental Database Migration

Author
Fernando Jorge Coelho Barreira Calheiros Alves

Institution
UP-FEUP