2021
Authors
Fonseca, F; Nunes, B; Salgado, M; Cunha, A;
Publication
Procedia Computer Science
Abstract
Capsule endoscopy made it possible to observe the inner lumen of the small bowel, but at the cost of lengthy processing of the resulting videos. The scientific community has therefore developed several machine learning strategies to help detect abnormalities in these videos. The published algorithms are typically trained and evaluated on small sets of images and ultimately prove inefficient when applied to full videos. In this experiment, we explored the problem of abnormality classification within an unbalanced dataset of images extracted from video capsule endoscopies, based on a feature vector extracted from the deepest layer of pre-trained Convolutional Neural Networks, to evaluate the impact of transfer learning with a small number of samples. The results showed that a reliable model can be obtained for the classification task using small portions of data from video capsule endoscopies.
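The transfer-learning setup described above can be sketched as a classifier trained on precomputed deep-feature vectors. This is a minimal illustration, not the paper's pipeline: the 512-dimensional features and the two classes are simulated here (in practice they would come from the deepest layer of a pretrained CNN), and `class_weight="balanced"` is one common way to handle the class imbalance the abstract mentions.

```python
# Sketch: abnormality classification on precomputed CNN feature vectors
# under class imbalance. Features are simulated stand-ins for activations
# taken from the deepest layer of a pretrained network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_normal, n_abnormal, dim = 900, 100, 512   # unbalanced classes, 512-d features
X = np.vstack([rng.normal(0.0, 1.0, (n_normal, dim)),
               rng.normal(0.5, 1.0, (n_abnormal, dim))])
y = np.array([0] * n_normal + [1] * n_abnormal)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" reweights the loss to compensate for the
# minority "abnormal" class
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Freezing the CNN and training only a light classifier on its features is what makes this approach viable with few samples.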
2021
Authors
Kazwiny, Y; Pedrosa, J; Zhang, ZQ; Boesmans, W; D'hooge, J; Vanden Berghe, P;
Publication
SCIENTIFIC REPORTS
Abstract
Ca2+ imaging is a widely used microscopy technique to simultaneously study cellular activity in multiple cells. The desired information consists of cell-specific time series of pixel intensity values, in which the fluorescence intensity represents cellular activity. For static scenes, cellular signal extraction is straightforward; however, multiple analysis challenges arise in recordings of contractile tissues, such as those of the enteric nervous system (ENS). This layer of critical neurons, embedded within the muscle layers of the gut wall, shows optical overlap between neighboring neurons, intensity changes due to cell activity, and constant movement. These challenges reduce the applicability of classical segmentation techniques and traditional stack-alignment and region-of-interest (ROI) selection workflows. Therefore, a signal extraction method that can deal with moving cells and is insensitive to large intensity changes in consecutive frames is needed. Here we propose a b-spline active contour method to delineate and track neuronal cell bodies based on local and global energy terms. We develop both a single- and a double-contour approach. The latter takes advantage of the appearance of GCaMP-expressing cells, and tracks the nucleus' boundaries together with the cytoplasmic contour, providing a stable delineation of neighboring, overlapping cells despite movement and intensity changes. The tracked contours can also serve as landmarks to relocate additional, manually selected ROIs. This improves the total yield of efficacious cell tracking and allows signal extraction from other cell compartments such as neuronal processes. Compared to manual delineation and other segmentation methods, the proposed method can track cells during large tissue deformations and high-intensity changes such as during neuronal firing events, while preserving the shape of the extracted Ca2+ signal. The analysis package represents a significant improvement to available Ca2+ imaging analysis workflows for ENS recordings and other systems where movement challenges traditional Ca2+ signal extraction workflows.
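The contour representation underlying such a method can be illustrated with a closed uniform cubic B-spline: a small set of control points defines a smooth, closed cell boundary that an active-contour optimizer would then move frame by frame. This is a generic sketch of the representation only, not the paper's energy terms or tracking algorithm; the circular "cell boundary" control points are synthetic.

```python
# Sketch: sample a closed uniform cubic B-spline from control points,
# the kind of smooth contour a B-spline active-contour method deforms
# to delineate a cell body.
import numpy as np

def closed_bspline(ctrl, n_samples=200):
    """Sample a closed uniform cubic B-spline from control points (k, 2)."""
    k = len(ctrl)
    ts = np.linspace(0, k, n_samples, endpoint=False)
    pts = np.empty((n_samples, 2))
    for j, t in enumerate(ts):
        i, u = int(t) % k, t - int(t)
        # wrap control-point indices so the contour closes on itself
        p0, p1, p2, p3 = (ctrl[(i - 1) % k], ctrl[i],
                          ctrl[(i + 1) % k], ctrl[(i + 2) % k])
        # uniform cubic B-spline basis functions
        b0 = (1 - u) ** 3 / 6
        b1 = (3 * u**3 - 6 * u**2 + 4) / 6
        b2 = (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6
        b3 = u**3 / 6
        pts[j] = b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
    return pts

# 12 control points on a circle of radius 10 (a toy cell outline)
ang = np.linspace(0, 2 * np.pi, 12, endpoint=False)
ctrl = 10 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
pts = closed_bspline(ctrl)
```

Because the curve is controlled by only a dozen points, tracking amounts to updating those few coordinates per frame, which is what keeps the delineation stable under movement.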
2021
Authors
Costa, P; Nogueira, AR; Gama, J;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2021)
Abstract
This work aims to develop a Machine Learning framework to predict voting behaviour. The data result from variables collected longitudinally during the Portuguese 2019 general election campaign. Naive Bayes (NB), Tree Augmented Naive Bayes (TAN), and three different expert models using Dynamic Bayesian Networks (DBN) predict voting behaviour systematically for each moment in time considered, using past information. Even though the differences found in some performance comparisons are not statistically significant, TAN and NB outperformed the DBN experts' models. The learned models outperformed one of the experts' models when predicting abstention and two when predicting the right-wing parties' vote. Specifically, for the right-wing parties' vote, TAN and NB presented satisfactory accuracy, while the experts' models were below 50% at the third evaluation moment.
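The NB baseline in this comparison can be sketched on categorical survey-style predictors. The data below are synthetic stand-ins, not the Portuguese 2019 election panel, and the predictor names are illustrative; the point is only that NB learns per-class feature distributions and predicts from past information.

```python
# Sketch: Naive Bayes on categorical survey-style predictors, as a
# stand-in for the NB model in the paper. Data are synthetic.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(42)
n = 600
# two hypothetical categorical predictors
past_vote = rng.integers(0, 3, n)   # 0=left, 1=right, 2=abstain
interest = rng.integers(0, 2, n)    # low/high political interest
# outcome mostly follows past vote, with 20% noise
vote = np.where(rng.random(n) < 0.8, past_vote, rng.integers(0, 3, n))

X = np.column_stack([past_vote, interest])
nb = CategoricalNB().fit(X, vote)
acc = nb.score(X, vote)
```

A TAN model would extend this by allowing each feature to additionally depend on one other feature, relaxing NB's conditional-independence assumption.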
2021
Authors
Costa, DG; Vasques, F; Portugal, P;
Publication
2021 IEEE INTERNATIONAL SMART CITIES CONFERENCE (ISC2)
Abstract
Emergency vehicles have been employed in rescue operations and supportive services, attending to victims and managing critical situations in smart cities. Such vehicles, notably ambulances, fire trucks, police cars and transit agency vehicles, may be tracked and monitored in some applications for different functions. When such emergency vehicles are not equipped with GPS receivers, cameras can be used to view emergency signs printed on them, allowing indirect identification of emergency vehicles, although many complexities have to be considered when performing tracking and monitoring based on visual sensors. In this context, this paper proposes a mathematical model focused on evaluating the coverage efficiency of a group of visual sensors over moving vehicles, aimed at visual coverage of emergency signs. For that, vehicles, emergency signs and visual sensors are mathematically modelled, with coverage interactions among these elements computed through the proposed geometry equations and algorithms. In this way, the effectiveness of the positioning and configuration of visual sensors can be evaluated without requiring actual deployment, potentially reducing costs when assessing visual monitoring systems in this scenario.
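The kind of geometric coverage test such a model builds on can be sketched as a range-plus-angle check: a sign is covered by a camera if it lies within the sensing range and inside the angular field of view. All names and parameters below are illustrative, not the paper's notation.

```python
# Sketch: is a point p (an emergency sign) covered by a visual sensor at c
# facing `heading_deg` with field of view `fov_deg` and range R?
import math

def covers(cx, cy, heading_deg, fov_deg, R, px, py):
    dx, dy = px - cx, py - cy
    if math.hypot(dx, dy) > R:                       # out of sensing range
        return False
    angle = math.degrees(math.atan2(dy, dx))
    # signed angular offset from the camera heading, in (-180, 180]
    diff = (angle - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2                  # inside the FoV cone

# camera at the origin facing east, 60-degree FoV, 50 m range
print(covers(0, 0, 0, 60, 50, 30, 10))    # sign slightly off-axis: True
print(covers(0, 0, 0, 60, 50, -30, 10))   # sign behind the camera: False
```

Evaluating this predicate over sampled vehicle trajectories and candidate sensor placements is what lets coverage efficiency be assessed without physical deployment.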
2021
Authors
Silva, B; Sousa, JJ; Lazecky, M; Cunha, A;
Publication
Procedia Computer Science
Abstract
The success achieved by using SAR data in the study of the Earth led to a firm commitment from space agencies to develop more and better space-borne SAR sensors. This involvement of the space agencies makes us believe that it is possible to raise the potential of SAR interferometry (InSAR) to near real-time monitoring. Among this ever-increasing number of sensors, ESA's Sentinel-1 (C-band) mission stands out and appears to be disruptive. This mission is acquiring vast volumes of data, making current analysis approaches unviable. This amount of data can no longer be analyzed and studied using classic methods, raising the need to create and use new techniques. We believe that Machine Learning techniques can be the solution to this issue, since they allow training Deep Learning models to automate human processes over vast volumes of data. In this paper, we use deep learning models to automatically find and locate deformation areas in InSAR interferograms without atmospheric correction. We train three state-of-the-art classification models for detecting deformation areas, achieving an AUC of 0.864 for the best model (VGG19 on wrapped interferograms). Additionally, we use the same models as encoders to train U-Net models, achieving a Dice score of 0.54 for InceptionV3. More data is needed to achieve better segmentation results.
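The two metrics reported above are standard and easy to pin down: AUC scores the frame-level deformation/no-deformation classifier, while Dice measures overlap between a predicted deformation mask and the ground truth. The arrays below are toy examples, not interferogram data.

```python
# Sketch: the evaluation metrics used above, on toy inputs.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

# classification: ranking quality of predicted scores
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc_score(y_true, y_score)    # 0.75 for this toy example

# segmentation: overlap between predicted and true masks
mask_pred = np.array([[1, 1], [0, 0]])
mask_true = np.array([[1, 0], [0, 0]])
d = dice(mask_pred, mask_true)          # 2*1/(2+1) = 0.667
```

A Dice score of 0.54, as reported for the InceptionV3 encoder, thus means the predicted and true deformation masks overlap on roughly half their combined area.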
2021
Authors
Jabbar, MA; Prasad, KMVV; Peng, SL; Reaz, MBI; Madureira, A;
Publication
Machine Learning Methods for Signal, Image and Speech Processing
Abstract
The signal processing (SP) landscape has been enriched by recent advances in artificial intelligence (AI) and machine learning (ML), yielding new tools for signal estimation, classification, prediction, and manipulation. Layered signal representations, nonlinear function approximation and nonlinear signal prediction are now feasible at very large scale in both dimensionality and data size. These are leading to significant performance gains in a variety of long-standing problem domains, such as speech and image analysis, as well as providing the ability to construct new classes of nonlinear functions (e.g., fusion, nonlinear filtering). This book will help academics, researchers, developers, and graduate and undergraduate students to comprehend complex SP data across a wide range of topical application areas, such as social multimedia data collected from social media networks, medical imaging data, and data from COVID tests. This book focuses on AI utilization in the speech, image, communications and virtual reality domains. © 2021 River Publishers. All rights reserved.