2020
Authors
Lima, B; Faria, JP; Hierons, R;
Publication
IEEE Access
Abstract
Ever more end-to-end digital services depend on the proper interoperation of multiple products, forming a distributed system, often subject to timing requirements. To ensure interoperability and the timely behavior of such systems, it is important to conduct integration tests that verify the interactions with the environment and between the system components in key scenarios. The automation of such integration tests requires that test components are also distributed, with local testers deployed close to the system components, coordinated by a central tester. Test coordination in such a test architecture is a major challenge. To address it, in this article we propose an approach based on the pre-processing of the test scenarios. We first analyze each test scenario to check whether conformance errors can be detected locally (local observability) and test inputs can be decided locally (local controllability) by the local testers, without the need to exchange coordination messages between the test components during test execution. If such properties do not hold, we next try to determine a minimum set of coordination messages or time constraints to be attached to the given test scenario to enforce those properties and effectively solve the test coordination problem with minimal overhead. The analysis and enforcement procedures were implemented in the DCO Analyzer tool for test scenarios described by means of UML sequence diagrams. Since many local observability and controllability problems may be caused by design flaws or incomplete specifications, and multiple ways may exist to enforce local observability and controllability, the tool was designed as a static analysis assistant to be used before test execution. DCO Analyzer was able to correctly identify local observability and controllability problems in real-world scenarios and help the users fix the detected problems.
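As a rough illustration of the local-controllability check described above, the following Python sketch analyzes a scenario modeled as an ordered list of messages, with one local tester attached to each system component. This is a simplified model for illustration only, not the DCO Analyzer implementation; Msg, observes, and controllability_gaps are hypothetical names.

from dataclasses import dataclass

@dataclass(frozen=True)
class Msg:
    sender: str    # component name, or a tester name for test inputs
    receiver: str
    label: str

def observes(component: str, msg: Msg) -> bool:
    # A local tester sees exactly the messages sent or received
    # by the component it is attached to.
    return component in (msg.sender, msg.receiver)

def controllability_gaps(scenario, testers):
    # testers maps each local tester to the component it stimulates.
    # A gap arises when a tester must send the next test input but did
    # not observe the immediately preceding message, so it cannot decide
    # locally when to act.
    gaps = []
    for i in range(1, len(scenario)):
        msg = scenario[i]
        if msg.sender in testers and not observes(testers[msg.sender], scenario[i - 1]):
            gaps.append((i, msg.sender))
    return gaps

# Example: T1 must send "stop" right after B->C "notify", which it cannot see.
scenario = [
    Msg("T1", "A", "start"),   # test input at component A
    Msg("A", "B", "forward"),  # observed by the testers at A and B
    Msg("B", "C", "notify"),   # observed by the testers at B and C only
    Msg("T1", "A", "stop"),    # gap: the tester at A missed B->C
]
print(controllability_gaps(scenario, {"T1": "A", "T2": "B", "T3": "C"}))  # [(3, 'T1')]

In this simplified model, each gap needs at least one coordination message from a tester that observed the preceding event, which is the intuition behind the minimal-coordination enforcement mentioned in the abstract.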
2020
Authors
Raza, M; Faria, JP;
Publication
IEEE Access
Abstract
High-maturity software development processes and development environments with automated data collection can generate significant amounts of data that can be periodically analyzed to identify performance problems, determine their root causes, and devise improvement actions. However, conducting the analysis manually is challenging because of the potentially large amount of data to analyze, the effort and expertise required, and the lack of benchmarks for comparison. In this article, we present ProcessPAIR, a novel method with tool support designed to help developers analyze their performance data with higher quality and less effort. Based on performance models structured manually by process experts and calibrated automatically from the performance data of many process users, it automatically identifies and ranks performance problems and potential root causes of individual subjects, so that subsequent manual analysis for the identification of deeper causes and improvement actions can be appropriately focused. We also show how ProcessPAIR was successfully instantiated and used in software engineering education and training, helping students analyze their performance data with higher satisfaction (by 25%), better quality of analysis outcomes (by 7%), and lower effort (by 4%), as compared to a traditional approach (with reduced tool support).
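To make the ranking idea concrete, here is a minimal Python sketch, assuming each performance indicator is a scalar and calibration reduces to empirical percentiles over the historical data of many process users; the names (percentile_rank, rank_problems) are illustrative, not part of ProcessPAIR's API.

def percentile_rank(value, population):
    # Fraction (as a percentage) of calibration values at or below the subject's value.
    below = sum(1 for v in population if v <= value)
    return 100.0 * below / len(population)

def rank_problems(subject, calibration, higher_is_better, threshold=33.0):
    # Score each indicator so that low scores are bad regardless of the
    # indicator's direction, flag the bottom third as potential problems,
    # and rank them from worst to least bad.
    problems = []
    for name, value in subject.items():
        pr = percentile_rank(value, calibration[name])
        score = pr if higher_is_better[name] else 100.0 - pr
        if score < threshold:
            problems.append((name, round(score, 1)))
    return sorted(problems, key=lambda item: item[1])

# Example with two hypothetical indicators.
calibration = {
    "productivity_loc_per_hour": [8, 10, 12, 15, 18, 20, 25, 30],
    "defect_density_per_kloc":   [5, 10, 15, 20, 25, 30, 40, 60],
}
subject = {"productivity_loc_per_hour": 9, "defect_density_per_kloc": 35}
direction = {"productivity_loc_per_hour": True, "defect_density_per_kloc": False}
print(rank_problems(subject, calibration, direction))
# [('productivity_loc_per_hour', 12.5), ('defect_density_per_kloc', 25.0)]

Potential root causes could be ranked the same way, by scoring the child indicators that the expert-structured performance model links to each flagged top-level indicator.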
2020
Authors
Lima, B; Faria, JP;
Publication
2020 ACM/IEEE 42nd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion 2020)
Abstract
To ensure interoperability and the correct behavior of heterogeneous distributed systems in key scenarios, it is important to conduct automated integration tests, based on distributed test components (called local testers) that are deployed close to the system components to simulate inputs from the environment and monitor the interactions with the environment and other system components. We say that a distributed test scenario is locally controllable and locally observable if test inputs can be decided locally and conformance errors can be detected locally by the local testers, without the need to exchange coordination messages between the test components during test execution (which may reduce the responsiveness and fault detection capability of the test harness). DCO Analyzer is the first tool that checks whether distributed test scenarios specified by means of UML sequence diagrams exhibit those properties, and automatically determines a minimum number of coordination messages to enforce them. The demo video for DCO Analyzer can be found at https://youtu.be/LVIusK36_bs.
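For intuition on the local-observability side, the self-contained Python sketch below (illustrative only, not the DCO Analyzer code) flags consecutive messages whose relative order no single local tester can verify, because no tester observes both of them; each such pair is a candidate place for a coordination message or a time constraint.

from dataclasses import dataclass

@dataclass(frozen=True)
class Msg:
    sender: str
    receiver: str
    label: str

def observes(component, msg):
    # The local tester at a component sees the messages it sends or receives.
    return component in (msg.sender, msg.receiver)

def observability_gaps(scenario, components):
    # The order of two consecutive messages is locally checkable only if
    # some single tester observes both; otherwise a wrong interleaving
    # could go undetected without extra coordination.
    gaps = []
    for i in range(1, len(scenario)):
        prev, curr = scenario[i - 1], scenario[i]
        if not any(observes(c, prev) and observes(c, curr) for c in components):
            gaps.append((i - 1, i))
    return gaps

scenario = [
    Msg("A", "B", "req"),
    Msg("C", "D", "ping"),  # no single tester sees both A->B and C->D
    Msg("D", "A", "pong"),
]
print(observability_gaps(scenario, {"A", "B", "C", "D"}))  # [(0, 1)]

Enforcement would then add, for each flagged pair, a coordination message (or, when synchronized clocks are available, a time constraint) that makes the ordering visible to one tester.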
2020
Authors
Goncalves, GM; Meneses, R; Faria, JP; Vidal, RM;
Publication
Proceedings of the 2020 IEEE Global Engineering Education Conference (EDUCON 2020)
Abstract
Over the past decades, software engineering has reached a level of maturity that entails great challenges in its education. Universities must prepare students for real-life challenges by offering courses that help them develop vital skills beyond hard skills (e.g., communication and self-management). At the Faculty of Engineering of the University of Porto, a pioneering course, dubbed Project Management Laboratory, offers a suitable environment for students to develop such skills by inviting industry to be closely involved in the education of the students. This course integrates practice and theory in a setting close to what the students will face when they move into industry. This paper reports on the experience, results, and benefits of this innovative course.
2020
Authors
Lima, B; Faria, JP;
Publication
CoRR
Abstract