Publications

2023

Effectiveness of Secondary Risk-Reducing Strategies in Patients With Unilateral Breast Cancer With Pathogenic Variants of BRCA1 and BRCA2 Subjected to Breast-Conserving Surgery: Evidence-Based Simulation Study (vol 12, e37177, 2022)

Authors
Maksimenko, J; Rodrigues, PP; Nakazawa-Miklasevica, M; Pinto, D; Miklasevics, E; Trofimovics, G; Gardovskis, J; Cardoso, F; Cardoso, MJ;

Publication
JMIR FORMATIVE RESEARCH

Abstract

2023

COMPLEXITY SCALABLE LEARNING-BASED IMAGE DECODING

Authors
Munna, TA; Ascenso, A;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP

Abstract
Recently, learning-based image compression has attracted a lot of attention, leading to the development of a new JPEG AI standard based on neural networks. Typically, this type of coding solution has much lower encoding complexity compared to conventional coding standards such as HEVC and VVC (Intra mode) but has much higher decoding complexity. Therefore, to promote the wide adoption of learning-based image compression, especially on resource-constrained (e.g., mobile) devices, it is important to achieve lower decoding complexity, even at the cost of some coding efficiency. This paper proposes a complexity scalable decoder that controls decoding complexity via a novel procedure to learn the filters of the convolutional layers at the decoder by varying the number of channels at each layer, effectively yielding decoding networks that range from simple to more complex. A regularization loss is employed with pruning after training to obtain a set of scalable layers, which may use more or fewer channels depending on the complexity budget. Experimental results show that complexity can be significantly reduced while still allowing a competitive rate-distortion performance.
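The channel-based scalability described above can be illustrated with a minimal sketch: rank the output channels of a convolutional layer by importance and keep only the fraction allowed by the complexity budget. The function name and the L1-norm importance criterion are illustrative assumptions here, not the paper's exact procedure (which learns the scalable layers with a regularization loss before pruning).

```python
import numpy as np

def prune_channels(weights, budget):
    """Keep only the fraction `budget` (0 < budget <= 1) of a conv
    layer's output channels, ranked by L1 norm (illustrative criterion).

    weights: array of shape (out_channels, in_channels, k, k)
    """
    importance = np.abs(weights).sum(axis=(1, 2, 3))   # L1 norm per output channel
    keep = max(1, int(round(budget * weights.shape[0])))
    kept_idx = np.argsort(importance)[::-1][:keep]     # most important first
    return weights[np.sort(kept_idx)]                  # preserve channel order

rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 32, 3, 3))
light = prune_channels(layer, 0.5)   # lighter decoder: 32 of 64 channels
full = prune_channels(layer, 1.0)    # full-complexity decoder
print(light.shape, full.shape)
```

Selecting a smaller budget at decode time trades some rate-distortion performance for a proportionally cheaper network, which is the scalability the abstract targets.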

2023

Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, visual perception systems for forestry robotics still need to be researched and explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark between edge-computing hardware and deep learning models. The dataset is composed of nearly 5,400 annotated images and enabled the training of nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors, and YOLOR. These detectors were deployed and tested on three edge-computing platforms (TPU, CPU, and GPU) and evaluated in terms of detection precision and inference time. The results showed that YOLOR was the best trunk detector, achieving nearly 90% F1 score and an average inference time of 13.7 ms on GPU. This work will favour the development of advanced vision perception systems for robotics in forestry operations.
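The ~90% F1 score reported for YOLOR is the harmonic mean of detection precision and recall; a short sketch shows how the two combine (the specific precision/recall values below are hypothetical, chosen only to illustrate the metric):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Balanced precision and recall around 0.9 yield the ~90% F1
# reported for the best trunk detector.
print(round(f1_score(0.91, 0.89), 3))
```

Because the harmonic mean penalizes imbalance, a detector with high precision but poor recall (or vice versa) scores well below either individual value.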

2023

Tensions in design and participation processes: An ethnographic approach to the design, building and evaluation of a collective intelligence model

Authors
Chaves, R; Motta, C; Correia, A; De Souza, J; Schneider, D;

Publication
Proceedings of the 2023 26th International Conference on Computer Supported Cooperative Work in Design, CSCWD 2023

Abstract

2023

The impact of ground control points for the 3D study of grapevines in steep slope vineyards

Authors
Stolarski, O; Lourenço, JM; Peres, E; Morais, R; Sousa, JJ; Pádua, L;

Publication
CENTERIS 2023 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2023, Porto, Portugal, November 8-10, 2023.

Abstract
Data acquisition through unmanned aerial vehicles (UAVs) has become integral to the study of agricultural crops, especially for multitemporal analyses spanning the entire growing season. Ensuring accurate data alignment is essential not only to maintain data quality but also to leverage the continuous monitoring of the same area over time. Ground control points (GCPs) play a critical role in geolocating UAV data. Their absence can lead to planimetric and altimetric discrepancies, which are particularly impactful in 3D plant-level studies. This study is centered on the examination of misalignment effects in a challenging steep slope vineyard environment and their impacts on 3D alignment accuracy. For this purpose, a UAV equipped with an RGB camera was used to capture imagery at two distinct flight heights. Various scenarios, each involving a different number of GCPs, were assessed to evaluate their impact on alignment precision. The methodology employed holds potential for assessing geolocation accuracy in complex 3D environments, providing valuable insights for vineyard monitoring. © 2024 The Author(s). Published by Elsevier B.V.
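The role of GCPs in georeferencing can be sketched with a simplified 2D (planimetric) example: fit a similarity transform (scale, rotation, translation) from image-derived GCP positions to their surveyed coordinates and report the residual RMSE. Real UAV photogrammetry uses full 3D bundle adjustment, so this is only a conceptual sketch; all names and values below are illustrative.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (Umeyama-style closed form)
    mapping image-derived GCP coordinates `src` onto surveyed `dst`."""
    n = len(src)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    cov = dst_c.T @ src_c / n
    U, S, Vt = np.linalg.svd(cov)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    scale = S.sum() * n / (src_c ** 2).sum()
    t = dst.mean(axis=0) - scale * R @ src.mean(axis=0)
    return scale, R, t

def rmse(src, dst, scale, R, t):
    """Root-mean-square residual of the fitted transform."""
    pred = scale * src @ R.T + t
    return np.sqrt(((pred - dst) ** 2).sum(axis=1).mean())

# Synthetic check: recover a known scale/rotation/shift from 6 GCPs.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(6, 2))            # image-derived positions
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
dst = 1.02 * src @ R0.T + np.array([5.0, -3.0])   # surveyed coordinates
s, R, t = fit_similarity(src, dst)
print(round(rmse(src, dst, s, R, t), 6))
```

With too few (or no) GCPs the transform is under-constrained and residuals grow, which is the planimetric/altimetric discrepancy the abstract highlights.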

2023

Addressing Chest Radiograph Projection Bias in Deep Classification Models

Authors
Pereira, SC; Rocha, J; Gaudio, A; Smailagic, A; Campilho, A; Mendonca, AM;

Publication
MEDICAL IMAGING WITH DEEP LEARNING, VOL 227

Abstract
Deep learning-based models are widely used for disease classification in chest radiographs. This exam can be performed in one of two projections (posteroanterior or anteroposterior), depending on the direction that the X-ray beam travels through the body. Since projection visibly affects the way anatomical structures appear in the scans, it may introduce bias in classifiers, especially when spurious correlations between a given disease and a projection occur. This paper examines the influence of chest radiograph projection on the performance of deep learning-based classification models and proposes an approach to mitigate projection-induced bias. Results show that a DenseNet-121 model is better at classifying images from the most representative projection in the data set, suggesting that projection is taken into account by the classifier. Moreover, this model can classify chest X-ray projection better than any of the fourteen radiological findings considered, without being explicitly trained for that task, putting it at high risk for projection bias. We propose a label-conditional gradient reversal framework to make the model insensitive to projection, by forcing the extracted features to be simultaneously good for disease classification and bad for projection classification, resulting in a framework with reduced projection-induced bias.
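The core mechanism behind the debiasing framework described above is a gradient reversal layer: an identity map in the forward pass whose backward pass flips (and scales) the gradient, so the shared feature extractor is pushed to be good for disease classification and bad for projection classification. The minimal numpy sketch below shows only this basic mechanism; the label-conditional aspect of the paper's framework, and the `lam` weighting, are not the authors' exact formulation.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies gradients by -lam in the
    backward pass, so upstream features are trained to *hurt* the
    projection classifier while still serving the disease classifier."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # features pass through unchanged

    def backward(self, grad):
        return -self.lam * grad       # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
print(grl.forward(x))                              # unchanged features
print(grl.backward(np.array([0.2, 0.4, -0.6])))    # flipped gradients
```

In a full training loop the reversed gradient flows from the projection head into the feature extractor, driving the extracted features toward projection invariance.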
