Publications

2024

Best practices for business process automation description - a case study

Authors
Silvares, C; Sao Mamede, H; Costa, J;

Publication
ENTERPRISE INFORMATION SYSTEMS

Abstract
Organizations in competitive, regulated environments must enhance business processes for efficiency, quality, and compliance while minimizing risks and costs. Process automation solutions play a vital role in achieving these goals, though the variety of tool descriptions creates challenges for compatibility and interoperability. This hinders innovation and competitiveness. The adoption of standard specifications or widely accepted best practices for automation descriptions offers a solution. This research aims to identify a set of best practices to guide process-oriented organizations in evaluating their current automation practices, ensuring alignment and fostering improvements in business process automation.

2024

Human versus Artificial Intelligence: Validation of a Deep Learning Model for Retinal Layer and Fluid Segmentation in Optical Coherence Tomography Images from Patients with Age-Related Macular Degeneration

Authors
Miranda, M; Santos-Oliveira, J; Mendonca, AM; Sousa, V; Melo, T; Carneiro, A;

Publication
DIAGNOSTICS

Abstract
Artificial intelligence (AI) models have received considerable attention in recent years for their ability to identify optical coherence tomography (OCT) biomarkers with clinical diagnostic potential and predict disease progression. This study aims to externally validate a deep learning (DL) algorithm by comparing its segmentation of retinal layers and fluid with a gold-standard method for manually adjusting the automatic segmentation of the Heidelberg Spectralis HRA + OCT software Version 6.16.8.0. A total of sixty OCT images of healthy subjects and patients with intermediate and exudative age-related macular degeneration (AMD) were included. A quantitative analysis of the retinal thickness and fluid area was performed, and the discrepancy between these methods was investigated. The results showed a moderate-to-strong correlation between the metrics extracted by both software types, in all the groups, and an overall near-perfect area overlap was observed, except for in the inner segment ellipsoid (ISE) layer. The DL system detected a significant difference in the outer retinal thickness across disease stages and accurately identified fluid in exudative cases. In more diseased eyes, there was significantly more disagreement between these methods. This DL system appears to be a reliable method for assessing important OCT biomarkers in AMD. However, further accuracy testing should be conducted to confirm its validity in real-world settings to ultimately aid ophthalmologists in OCT imaging management and guide timely treatment approaches.
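
The area-overlap comparison mentioned in the abstract is commonly quantified with a metric such as the Dice coefficient. A minimal sketch of that idea (the metric choice and toy masks are illustrative assumptions, not the paper's actual validation protocol):

```python
def dice_coefficient(mask_a: set, mask_b: set) -> float:
    """Dice overlap between two binary masks given as sets of pixel coordinates.

    Returns 1.0 for perfect agreement, 0.0 for no overlap.
    """
    if not mask_a and not mask_b:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy fluid-region masks: manual adjustment vs automatic DL segmentation
manual = {(1, 1), (1, 2), (2, 1), (2, 2)}
auto = {(1, 1), (1, 2), (2, 1)}
print(round(dice_coefficient(manual, auto), 3))  # 2*3/(4+3) ≈ 0.857
```

A "near-perfect area overlap" as reported in the study corresponds to Dice values approaching 1.0.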

2024

A C Subset for Ergonomic Source-to-Source Analyses and Transformations

Authors
Matos, JN; Bispo, J; Sousa, LM;

Publication
PROCEEDINGS OF THE RAPIDO 2024 WORKSHOP, HIPEAC 2024

Abstract
Modern compiled software, written in languages such as C, relies on complex compiler infrastructure. However, developing new transformations and improving existing ones can be challenging for researchers and engineers. Often, transformations must be implemented by modifying the compiler itself, which may not be feasible for technical or legal reasons. Source-to-source compilers make it possible to directly analyse and transform the original source, making transformations portable across different compilers and allowing rapid research and prototyping of code transformations. However, this approach has the drawback of exposing the researcher to the full breadth of the source language, which is often more extensive and complex than the IRs used in traditional compilers. In this work, we propose a solution to tame the complexity of the source language and make source-to-source compilers an ergonomic platform for program analysis and transformation. We define a simpler subset of the C language that can implement the same programs with fewer constructs, and implement a set of source-to-source transformations that automatically normalise the input source code into equivalent programs expressed in the proposed subset. Finally, as a case study, we implement a function inlining transformation that targets the subset. We show that, for this case study, the assumptions afforded by using a simpler language subset greatly improve the number of cases in which the transformation can be applied, increasing the average success rate from 37%, before normalisation, to 97%, after normalisation. We also evaluate the performance of several benchmarks after applying a naive inlining algorithm, and obtain a 12% performance improvement in certain applications after compiling with the -O2 flag, in both Clang and GCC, suggesting there is room for exploring source-level transformations as a complement to traditional compilers.
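
The core idea can be illustrated with a toy sketch (Python over a miniature expression AST; this is a conceptual assumption for illustration, not the paper's actual C implementation): once normalisation guarantees that every function body is a single side-effect-free expression, inlining a call reduces to substituting the argument expressions for the parameters.

```python
# Toy AST nodes: ('var', name) | ('const', k) | ('add', lhs, rhs) | ('mul', lhs, rhs)
# Hypothetical "normalised" functions: name -> (parameter names, single-expression body)
FUNCS = {
    "twice": (["x"], ("mul", ("const", 2), ("var", "x"))),
}

def substitute(expr, env):
    """Replace ('var', p) leaves with the argument expressions bound in env."""
    kind = expr[0]
    if kind == "var":
        return env.get(expr[1], expr)
    if kind == "const":
        return expr
    return (kind,) + tuple(substitute(e, env) for e in expr[1:])

def inline_call(fname, args):
    """Inline a call by pure substitution; safe because bodies are normalised."""
    params, body = FUNCS[fname]
    return substitute(body, dict(zip(params, args)))

def evaluate(expr):
    """Evaluate a closed toy expression."""
    kind = expr[0]
    if kind == "const":
        return expr[1]
    if kind == "add":
        return evaluate(expr[1]) + evaluate(expr[2])
    if kind == "mul":
        return evaluate(expr[1]) * evaluate(expr[2])
    raise ValueError(f"unbound variable: {expr!r}")

# Inline twice(3 + 4) -> 2 * (3 + 4)
inlined = inline_call("twice", [("add", ("const", 3), ("const", 4))])
print(evaluate(inlined))  # 14
```

In full C, side effects, multiple returns, and aliasing break this simple substitution; the normalisation passes described in the paper exist precisely to restore such assumptions.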

2024

A Multi-objective Approach for Solving Distributed Job Shop Scheduling Problems

Authors
dos Santos, F; Costa, L; Varela, L;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, OL2A 2024, PT I

Abstract
Nowadays, the industrial market is characterised by high levels of competition, with customers increasingly demanding in terms of quality, delivery times, costs, etc. With growing demand and the need to increase productivity, many companies have in recent years decentralised their factories, moving to distributed production. Today's manufacturing systems are distributed in the sense that several jobs have to be carried out on machines located in different factories. This paper proposes a multi-objective distributed job shop scheduling model with unrelated parallel machines and sequence-dependent setup times. The transport time of the raw materials needed to carry out a given job at a factory is also taken into account. Small instances of the problem were solved using NSGA-III with the aim of simultaneously minimising two objectives: the makespan and the average completion time. Preliminary results show the validity of this approach.
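
The two objectives named above can be sketched on a toy single-machine sequence with sequence-dependent setups and material transport delays (all data below are illustrative assumptions, not an instance from the paper):

```python
# Toy instance: processing times, raw-material transport times to the factory,
# and sequence-dependent setup times (setup[prev][next]; None = initial setup).
processing = {"J1": 5, "J2": 3, "J3": 6}
transport = {"J1": 2, "J2": 1, "J3": 4}
setup = {
    None: {"J1": 1, "J2": 2, "J3": 1},
    "J1": {"J2": 2, "J3": 3},
    "J2": {"J1": 1, "J3": 2},
    "J3": {"J1": 2, "J2": 1},
}

def evaluate_sequence(seq):
    """Return (makespan, average completion time) for one machine sequence."""
    t, prev, completions = 0, None, []
    for job in seq:
        # A job can start only after setup finishes AND its material has arrived.
        start = max(t + setup[prev][job], transport[job])
        t = start + processing[job]
        completions.append(t)
        prev = job
    return t, sum(completions) / len(completions)

m, a = evaluate_sequence(["J2", "J1", "J3"])
print(m, round(a, 2))  # 20 12.0
```

NSGA-III then searches over sequences (and machine/factory assignments) to approximate the Pareto front of these two conflicting objectives, rather than collapsing them into one score.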

2024

MIMt: a curated 16S rRNA reference database with less redundancy and higher accuracy at species-level identification

Authors
Cabezas, MP; Fonseca, NA; Muñoz-Mérida, A;

Publication
ENVIRONMENTAL MICROBIOME

Abstract
Motivation: Accurate determination and quantification of the taxonomic composition of microbial communities, especially at the species level, is one of the major issues in metagenomics. This is primarily due to the limitations of commonly used 16S rRNA reference databases, which either contain a lot of redundancy or a high percentage of sequences with missing taxonomic information. This may lead to erroneous identifications and, thus, to inaccurate conclusions regarding the ecological role and importance of those microorganisms in the ecosystem.

Results: The current study presents MIMt, a new 16S rRNA database for the identification of archaea and bacteria, encompassing 47 001 sequences, all precisely identified at species level. In addition, a MIMt2.0 version was created with only curated sequences from RefSeq Targeted Loci, comprising 32 086 sequences. MIMt aims to be updated twice a year to include all newly sequenced species. We evaluated MIMt against Greengenes, RDP, GTDB and SILVA in terms of sequence distribution and taxonomic assignment accuracy. Our results showed that MIMt contains less redundancy and, despite being 20 to 500 times smaller than existing databases, outperforms them in completeness and taxonomic accuracy, enabling more precise assignments at lower taxonomic ranks and thus significantly improving species-level identification.
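
At its simplest, taxonomic assignment against a 16S reference database means finding the best-matching reference sequence above an identity threshold. A minimal sketch of that idea (toy sequences and a naive per-position identity are illustrative assumptions; MIMt-based pipelines use full-length curated references and proper alignment):

```python
def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy reference "database": species -> short 16S fragment (not real data)
reference = {
    "Escherichia coli": "ACGTACGTACGT",
    "Bacillus subtilis": "ACGTTTGTACCT",
    "Pseudomonas aeruginosa": "TCGAACGTAGGT",
}

def assign(query: str, min_identity: float = 0.97):
    """Assign the query to the closest reference species, if above threshold."""
    species, score = max(
        ((s, identity(query, r)) for s, r in reference.items()),
        key=lambda t: t[1],
    )
    return species if score >= min_identity else "unassigned"

print(assign("ACGTACGTACGT"))  # Escherichia coli
```

Redundant or mislabelled references inflate ties and wrong best hits at this step, which is why a smaller, curated, species-resolved database can outperform much larger ones.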

2024

Smart Factories - design and results of a new course in an MSc curriculum of engineering

Authors
Azevedo, A; Almeida, AH;

Publication
2024 IEEE GLOBAL ENGINEERING EDUCATION CONFERENCE, EDUCON 2024

Abstract
In the Fourth Industrial Revolution era, commonly known as Industry 4.0, the manufacturing industry is undergoing a profound transformation driven by the convergence of technological advancements. Industry 4.0 technologies are revolutionising how products are manufactured, from design to production to delivery. These technologies, such as collaborative robotics, digital twins, IoT, and data analytics, enable manufacturers to improve efficiency, productivity, and quality. As Industry 4.0 continues to evolve, the demand for skilled engineers who can effectively design, implement, and manage these sophisticated systems is growing rapidly. Future mechanical engineers must be prepared to navigate this complex and data-driven manufacturing landscape. To address this need, the Faculty of Engineering at the University of Porto developed a new course titled Smart Factories, specifically designed to equip master's students with the knowledge and skills necessary to thrive in the factories of the future. This course utilises an innovative, active experimental learning methodology with industry collaborations and a comprehensive curriculum to foster the development of the multidisciplinary skills necessary to excel in this rapidly evolving field. Through this comprehensive and innovative approach, the Smart Factories course aims to prepare future mechanical engineers to become leaders in smart manufacturing, driving innovation and shaping future factories.
