2020
Authors
Zibaii, MI; Layeghi, A; Dargahi, L; Haghparast, A; Frazao, O;
Publication
Journal of Science and Technological Researches
Abstract
2020
Authors
Campaniço, AT; Khanal, SR; Paredes, H; Filipe, V;
Publication
TECH-EDU
Abstract
In the competitive automotive market, extremely high quality standards must be ensured regardless of the growing product and manufacturing complexity brought by customization, so reliable and precise detection of any non-conformities before the vehicle leaves the assembly line is paramount. In this paper we propose a wearable solution to aid quality-control workers in detecting, visualizing and relaying non-conformities, while also reducing known performance issues such as skill gaps and fatigue and improving training methods. We also describe how the reliability, precision and validity tests of the visualization module of our framework were performed, guaranteeing that no undesired non-conformities occurred in the subsequent usability tests and training simulator.
2020
Authors
Medeiros, FSB; Simonetto, EdO; Castro, HCGAd;
Publication
Revista de Gestão dos Países de Língua Portuguesa
Abstract
2020
Authors
Baptista, Ana Alice; Branco, Pedro; Azevedo, Bruno; Oliveira e Sá, Jorge; Ribeiro, Ana Carolina Freitas; Malta, Mariana Curado;
Publication
Abstract
Over 2.5 million scientific articles are published annually, totaling 6,849.32 per day in 2015; by 2018 this figure had risen to over 3 million articles, or 8,219.18 per day [1]. Thus, finding the most relevant Research Outputs (ROs), such as articles, theses and patents, is increasingly difficult, due in part to existing interfaces returning massive lists of results.
The project aims to develop and test a platform that incorporates social data to capture various usage metrics, defining a new metric that we call Social Scholarly Experience Metrics (SSEM) and a new visualization technique that, together, will support fast access to relevant ROs.
2020
Authors
Victorino, G; Braga, R; Santos Victor, J; Lopes, CM;
Publication
OENO ONE
Abstract
Forecasting vineyard yield with accuracy is one of the most important research trends in viticulture today. Conventional methods for yield forecasting are manual, require a lot of labour and resources, and are often destructive. Recently, image-analysis approaches have been explored to address this issue. Many of these approaches rely on cameras deployed on ground platforms that collect images at proximal range, on-the-go. As the platform moves, yield components and other image-based indicators are detected and counted to perform yield estimations. However, in most situations, when image acquisition is done in non-disturbed canopies, a high fraction of yield components is occluded. The present work's goal is twofold: firstly, to evaluate the visibility of yield components in natural conditions throughout the grapevine's phenological stages; secondly, to explore single-bunch images taken in lab conditions to obtain the best visible bunch attributes to use as yield indicators. In three vineyard plots of red (Syrah) and white varieties (Arinto and Encruzado), several 1 m canopy segments were imaged using the robotic platform Vinbot. Images were collected from the winter bud stage until harvest, and yield components were counted in the images as well as in the field. At the pea-sized berries, veraison and full maturation stages, a bunch sample was collected and brought to lab conditions for detailed assessments at bunch scale. At early stages, all varieties showed good visibility of spurs and shoots; however, the number of shoots was highly and significantly correlated with yield only for the variety Syrah. Inflorescence and bunch occlusion reached high percentages, above 50 %. In lab conditions, among the several bunch attributes studied, bunch volume and bunch projected area showed the highest correlation coefficients with yield.
In field conditions, using non-defoliated vines, the bunch projected area of visible bunches presented high and significant correlation coefficients with yield, regardless of the fruit's occlusion. Our results show that counting yield components with image analysis in non-defoliated vines may be insufficient for accurate yield estimation. On the other hand, using bunch projected area as a predictor can be the best option to achieve that goal, even with high levels of occlusion.
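As a hypothetical illustration of the predictor-based estimation the abstract describes, the sketch below fits an ordinary least-squares line relating bunch projected area to yield and reports the Pearson correlation coefficient. The data, variable names and units are invented for the example and are not taken from the study.

```python
import numpy as np

# Hypothetical measurements (not from the study): bunch projected
# area in cm^2 and segment yield in grams for five canopy segments.
area = np.array([55.0, 82.0, 117.0, 149.0, 201.0])
yield_g = 2.0 * area + 30.0  # synthetic linear relationship

# Ordinary least squares fit: yield = slope * area + intercept
A = np.column_stack([area, np.ones_like(area)])
(slope, intercept), *_ = np.linalg.lstsq(A, yield_g, rcond=None)

# Pearson correlation between the predictor and the yield
r = np.corrcoef(area, yield_g)[0, 1]
```

With real field data the correlation would of course be below 1; the point is that a single image-derived attribute can drive a simple, interpretable yield model.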
2020
Authors
Saraiva, AA; Santos, DBS; Francisco, AA; Sousa, JVM; Ferreira, NMF; Soares, S; Valente, A;
Publication
PROCEEDINGS OF THE 13TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES, VOL 3: BIOINFORMATICS
Abstract
Recent advances in image classification have shown that convolutional neural networks (CNNs) can classify images with high precision. This paper proposes a method for classifying breathing sounds with a CNN, which is trained and tested on a visual representation of each audio sample that exposes features suitable for classification, using the same techniques employed to classify images. For this we used Mel Frequency Cepstral Coefficients (MFCCs): for each audio file in the dataset we extracted MFCC features, giving an image-like representation of each audio sample. The method proposed in this article obtained results above 74% in classifying the respiratory sounds across the four classes available in the database used (normal, crackles, wheezes, both).
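As a rough sketch of the feature-extraction step the abstract describes, the pure-NumPy function below computes MFCC-style features: framing with a Hann window, power spectrum, triangular mel filterbank, log compression, and a DCT-II. Real pipelines typically use a library such as librosa, and every parameter value here (sample rate, FFT size, filter counts) is an illustrative assumption, not a setting from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=4000, n_fft=256, hop=128, n_mels=20, n_coeffs=13):
    # Split the signal into overlapping Hann-windowed frames.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2   # power spectrum
    mel_energy = power @ mel_filterbank(sr, n_fft, n_mels).T
    log_mel = np.log(mel_energy + 1e-10)
    # DCT-II decorrelates the log-mel energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * n[:, None])
    return (log_mel @ dct.T)[:, :n_coeffs]            # shape: (frames, coeffs)

# Example: one second of a 440 Hz tone in place of a breathing-sound clip.
feats = mfcc(np.sin(2 * np.pi * 440 * np.arange(4000) / 4000))
```

The resulting `(frames, coeffs)` matrix is the image-like representation that a CNN would then consume, one per audio sample.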