2023
Authors
Silvano, P; Amorim, E; Leal, A; Cantante, I; Silva, F; Jorge, A; Campos, R; Nunes, S;
Publication
Text2Story@ECIR
Abstract
News articles typically include reporting events to inform readers of what happened. These reporting events are not part of the story being told but are nonetheless a relevant part of the news and can pose a challenge to the computational processing of news narratives. They compose a reporting narrative, which is the focus of the present study. This paper aims to demonstrate, through selected use cases, how a comprehensive annotation scheme with suitable tags and links can properly represent reporting events and the way they relate to the events that make up the story. In addition, we put forward a proposal for their visual representation that enables a systematic and detailed analysis of the importance of reporting events in the news structure. Finally, we describe some lexico-grammatical features of reporting events, which can contribute to their automatic detection.
2023
Authors
Queiroz, PGG; Rodrigues, LCC; Fernandes, SR;
Publication
Anais do XXIX Workshop de Informática na Escola (WIE 2023)
Abstract
2023
Authors
Marín, B; Vos, TEJ; Snoeck, M; Paiva, ACR; Fasolino, AR;
Publication
CAiSE Research Projects Exhibition
Abstract
The significance of software testing cannot be overstated, as its poor implementation often leads to problematic and faulty software applications. This problem stems from a mismatch between the skills required by industry, the learning requirements of students, and the current teaching methodology for testing in higher and vocational education institutes. This project aims to create seamless teaching materials for testing education that are in line with industry standards and learning needs. Considering the diverse socioeconomic environment that will benefit from this project, a consortium of partners ranging from universities to small businesses has been assembled. The project starts with research into sense-making and cognitive models for learning and doing testing. Additionally, a study will be conducted to identify the training and knowledge transfer requirements for testing within the industry. Based on the research findings and study outcomes, teaching capsules for software testing will be developed, taking into account the cognitive models of students and the needs of the industry. Once the effectiveness of these capsules has been validated, the capsules and their instructional material will be made available to other researchers and professors to improve testing education.
2023
Authors
Neto, A; Couto, D; Coimbra, MT; Cunha, A;
Publication
VISIGRAPP (4: VISAPP)
Abstract
Colorectal cancer is the third most common cancer and the second cause of cancer-related deaths in the world. Colonoscopic surveillance is extremely important to find cancer precursors such as adenomas or serrated polyps. Identifying small or flat polyps can be challenging during colonoscopy and is highly dependent on the colonoscopist's skills. Deep learning algorithms can improve the polyp detection rate and consequently help reduce physician subjectivity and operation errors. This study aims to compare the YOLO object detection architecture with self-attention models. The Kvasir-SEG polyp dataset, composed of 1000 annotated colonoscopy still images, was used to train (700 images) and validate (300 images) the polyp detection algorithms. Well-established architectures such as YOLOv4 and different YOLOv5 models were compared with more recent algorithms that rely on self-attention mechanisms, namely the DETR model, to understand which technique can be more helpful and reliable in clinical practice. In the end, YOLOv5 proved to be the model achieving the best results for polyp detection with 0.81 mAP; however, DETR reached 0.80 mAP, showing the potential to match more well-established architectures.
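Not part of the paper's code: the mAP figures quoted above rest on the Intersection-over-Union (IoU) criterion, which decides whether a predicted box counts as a true positive. A minimal sketch, assuming the conventional (x1, y1, x2, y2) box format and the usual 0.5 threshold for mAP@0.5:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted polyp box is a true positive when its IoU with a
# ground-truth box exceeds the threshold (typically 0.5).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```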
2023
Authors
Rodrigues, L; Magalhaes, SA; da Silva, DQ; dos Santos, FN; Cunha, M;
Publication
AGRONOMY-BASEL
Abstract
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be prone to error, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark includes the individual pairing of each model with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-Score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approximating a real mixed cropping system. Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crops' phenological research, a pivotal step towards automating decision support systems for precision horticulture.
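Not drawn from the paper's code: the F1-Score and balanced accuracy reported above are standard metrics computable from true/false positive and negative counts. A minimal sketch of both definitions, with illustrative counts that are not taken from the study:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def balanced_accuracy(tp, fp, fn, tn):
    """Mean of the per-class recalls; robust to class imbalance."""
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    return (sensitivity + specificity) / 2

# Hypothetical counts for one crop/phenophase pairing:
print(f1_score(80, 10, 20))            # 16/19 ≈ 0.8421
print(balanced_accuracy(80, 10, 20, 90))  # 0.85
```

Balanced accuracy is a sensible headline metric here because images per phenophase are typically unevenly distributed across the growing season.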
2023
Authors
Nunes, A; Matos, A;
Publication
JOURNAL OF MARINE SCIENCE AND ENGINEERING
Abstract
Nowadays, semantic segmentation is used increasingly often in exploration by underwater robots. For example, it is used in autonomous navigation so that the robot can recognise the elements of its environment during the mission and avoid collisions. Other applications include the search for archaeological artefacts, the inspection of underwater structures, and species monitoring. Therefore, it is necessary to improve the performance in these tasks as much as possible. To this end, we compare some methods for image quality improvement and data augmentation and test whether higher performance metrics can be achieved with both strategies. The experiments are performed with the SegNet implementation and the SUIM dataset with eight common underwater classes, so that the results obtained can be compared with those already known. The results obtained with both strategies show that they are beneficial and lead to better performance, achieving a mean IoU of 56% and an increased overall accuracy of 81.8%. The results for the individual classes show five classes with an IoU value close to 60% and only one class with an IoU value below 30%, which is a more reliable result and easier to use in real contexts.
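Not part of the paper's code: a key detail of data augmentation for segmentation is that every geometric transform must be applied identically to the image and its label mask, or the pixel-level labels drift out of alignment. A minimal sketch with plain nested lists and a single horizontal flip; the function names are illustrative, not from the paper:

```python
def hflip(grid):
    """Horizontally flip a 2-D grid (image or label mask) stored as nested lists."""
    return [list(reversed(row)) for row in grid]

def augment_pair(image, mask):
    """Return the original (image, mask) pair plus a flipped copy,
    applying the same geometric transform to both so that each pixel
    keeps its class label."""
    return [(image, mask), (hflip(image), hflip(mask))]

# Tiny 2x2 example: pixel values and their class labels stay aligned.
pairs = augment_pair([[1, 2], [3, 4]], [[0, 1], [1, 0]])
```

Real pipelines would add rotations, crops, and photometric changes (the latter applied to the image only, never to the mask), but the image/mask pairing principle is the same.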