2024
Authors
Ferreira V.R.S.; de Paiva A.C.; Silva A.C.; de Almeida J.D.S.; Junior G.B.; Renna F.;
Publication
International Conference on Enterprise Information Systems, ICEIS - Proceedings
Abstract
This work proposes a deep learning-based adversarial diffusion model for translating non-contrast-enhanced computed tomography (CT) images of the heart into contrast-enhanced images. The study addresses challenges in medical image translation by combining concepts from generative adversarial networks (GANs) and diffusion models. Results were evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) to demonstrate the model's effectiveness in generating contrast images while preserving quality and visual similarity. Despite these successes, root mean square error (RMSE) analysis indicates persistent challenges, highlighting the need for continued improvement. The intersection of GANs and diffusion models promises future advances, contributing significantly to clinical practice. A comparison of the CyTran, CycleGAN and Pix2Pix networks with the proposed model indicates directions for improvement.
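For illustration, a minimal sketch of how the two reported image-quality metrics, PSNR and SSIM, can be computed when comparing a generated contrast-enhanced slice to its ground truth, assuming normalized 2D slices and the scikit-image metrics API. This is a stand-in for the kind of evaluation described, not the authors' code.

    # Illustrative only: scoring a generated contrast-enhanced CT slice
    # against its ground-truth slice with PSNR and SSIM (scikit-image).
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_translation(generated, target):
        """PSNR and SSIM between two same-shape slices normalized to [0, 1]."""
        return {
            "psnr": peak_signal_noise_ratio(target, generated, data_range=1.0),
            "ssim": structural_similarity(target, generated, data_range=1.0),
        }

    # Stand-in data; real inputs would be registered, normalized CT slices.
    rng = np.random.default_rng(0)
    target = rng.random((256, 256))
    generated = np.clip(target + 0.05 * rng.standard_normal(target.shape), 0.0, 1.0)
    print(evaluate_translation(generated, target))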
2020
Authors
Ferreira, AC; Silva, LR; Renna, F; Brandl, HB; Renoult, JP; Farine, DR; Covas, R; Doutrelant, C;
Publication
METHODS IN ECOLOGY AND EVOLUTION
Abstract
Individual identification is a crucial step to answer many questions in evolutionary biology and is mostly performed by marking animals with tags. Such methods are well established, but often make data collection and analyses time-consuming, or limit the contexts in which data can be collected. Recent computational advances, specifically deep learning, can help overcome the limitations of collecting large-scale data across contexts. However, one of the bottlenecks preventing the application of deep learning for individual identification is the need to collect and identify hundreds to thousands of individually labelled pictures to train convolutional neural networks (CNNs). Here we describe procedures for automating the collection of training data, generating training datasets, and training CNNs to allow identification of individual birds. We apply our procedures to three small bird species, the sociable weaver Philetairus socius, the great tit Parus major and the zebra finch Taeniopygia guttata, representing both wild and captive contexts. We first show how the collection of individually labelled images can be automated, allowing the construction of training datasets consisting of hundreds of images per individual. Second, we describe how to train a CNN to uniquely re-identify each individual in new images. Third, we illustrate the general applicability of CNNs for studies in animal biology by showing that trained CNNs can re-identify individual birds in images collected in contexts that differ from the ones originally used to train the CNNs. Finally, we present a potential solution to the issue of new incoming individuals. Overall, our work demonstrates the feasibility of applying state-of-the-art deep learning tools for individual identification of birds, both in the laboratory and in the wild. These techniques are made possible by our approaches that allow efficient collection of training data. The ability to conduct individual recognition of birds without requiring external markers that can be visually identified by human observers represents a major advance over current methods.
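For illustration, a minimal sketch of the kind of setup the abstract describes: fine-tuning a pre-trained CNN to classify images by individual identity. The backbone (a torchvision ResNet-18), the image size and the number of individuals are assumptions for the example, not the paper's exact pipeline.

    # Illustrative only: N-way individual re-identification by fine-tuning
    # a pre-trained CNN (PyTorch / torchvision).
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_INDIVIDUALS = 30  # hypothetical number of labelled birds

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_INDIVIDUALS)  # new ID head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, labels):
        """One optimization step over a batch of (image, individual-ID) pairs."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test with a random batch shaped like 224x224 RGB crops.
    loss = train_step(torch.randn(8, 3, 224, 224),
                      torch.randint(0, NUM_INDIVIDUALS, (8,)))
    print(f"batch loss: {loss:.3f}")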
2023
Authors
Pedrosa, J; Silva, R; Santos, C; Nunes, F; Mancio, J; Renna, F; Fontes Carvalho, R;
Publication
European Heart Journal - Cardiovascular Imaging
Abstract
2023
Authors
Barbosa, M; Renna, F; Dourado, N; Costa, R;
Publication
Studies in Computational Intelligence
Abstract
This paper proposes a tool that extracts data from computed tomography (CT) scans of long bones, applies filters to distinguish between cortical and cancellous tissue, and converts the tissues into a three-dimensional (3D) model that can be used to generate finite element meshes. To identify the best segmentation technique for the problem under study, cortical, cancellous and medulla tissue segmentation was tested based on image histogram information, simple Hounsfield unit (HU) thresholds, HU thresholds combined with morphological operator filters, and active contour methods (active contour, random walker segmentation and findContours). These segmentations were evaluated qualitatively through visual comparison and quantitatively through the Dice coefficient (DICE) and mean squared error (MSE). The developed algorithm achieves a Dice coefficient higher than 0.95 and an MSE lower than 0.01 for cortical tissue segmentation, which allows it to be used as a bone characterization method.
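For illustration, a minimal sketch of HU-threshold segmentation scored with the two metrics named above, Dice and MSE. The HU cut-offs and the toy slice are illustrative assumptions, not the paper's values.

    # Illustrative only: cortical-bone masking by Hounsfield-unit thresholds,
    # scored against a reference mask with Dice and MSE.
    import numpy as np

    def segment_cortical(hu_slice, lo=600.0, hi=2000.0):
        """Binary mask of voxels whose HU value falls in an assumed cortical range."""
        return (hu_slice >= lo) & (hu_slice <= hi)

    def dice(pred, ref):
        """Dice coefficient between two binary masks."""
        inter = np.logical_and(pred, ref).sum()
        return 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)

    def mse(pred, ref):
        """Mean squared error between the masks viewed as 0/1 images."""
        return np.mean((pred.astype(float) - ref.astype(float)) ** 2)

    # Toy slice: air background, cancellous-like interior, cortical-like walls.
    hu = np.full((128, 128), -1000.0)
    hu[40:90, 40:90] = 300.0
    hu[40:90, 40:44] = hu[40:90, 86:90] = 1500.0
    ref = hu >= 600.0
    pred = segment_cortical(hu)
    print(f"Dice: {dice(pred, ref):.3f}, MSE: {mse(pred, ref):.4f}")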
2023
Authors
Gaudio, A; Giordano, N; Coimbra, MT; Kjaergaard, B; Schmidt, SE; Renna, F;
Publication
Computing in Cardiology, CinC 2023, Atlanta, GA, USA, October 1-4, 2023
Abstract
2023
Authors
Martins, ML; Pedroso, M; Libânio, D; Dinis Ribeiro, M; Coimbra, M; Renna, F;
Publication
2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC
Abstract
Gastric Intestinal Metaplasia (GIM) is one of the precancerous conditions in the gastric carcinogenesis cascade, and its optical diagnosis during endoscopic screening is challenging even for seasoned endoscopists. Several solutions leveraging pre-trained deep neural networks (DNNs) have recently been proposed to assist human diagnosis. In this paper, we present a comparative study of these architectures on a new dataset containing GIM and non-GIM narrow-band imaging still frames. We find that the surveyed DNNs perform remarkably well on average, but still exhibit sizeable inter-fold variability during cross-validation. An additional ad hoc analysis suggests that these baseline architectures may not perform equally well at all scales when diagnosing GIM.
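For illustration, a minimal sketch of the stratified cross-validation protocol under which inter-fold variability becomes visible: per-fold scores are reported alongside their mean and spread. The stand-in features and linear classifier are placeholders; the paper evaluates pre-trained DNNs on narrow-band imaging frames instead.

    # Illustrative only: stratified k-fold evaluation with per-fold scores.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 32))   # placeholder image embeddings
    y = rng.integers(0, 2, size=200)     # toy GIM vs non-GIM labels

    scores = []
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))

    print(f"per-fold accuracy: {np.round(scores, 3)}")
    print(f"mean +/- std: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")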