2024
Authors
Oliveira M.; Cerqueira R.; Pinto J.R.; Fonseca J.; Teixeira L.F.;
Publication
IEEE Transactions on Intelligent Vehicles
Abstract
Autonomous Vehicles aim to understand their surrounding environment by detecting relevant objects in the scene, which can be performed using a combination of sensors. The accurate prediction of pedestrians is a particularly challenging task, since existing algorithms have difficulty detecting small objects. This work studies and addresses this often overlooked problem by proposing Multimodal PointPillars (M-PP), a fast and effective novel fusion architecture for 3D object detection. Inspired by both MVX-Net and PointPillars, image features from a 2D CNN-based feature map are fused with the 3D point cloud in an early fusion architecture. By changing the heavy 3D convolutions of MVX-Net to a set of convolutional layers in 2D space, along with combining LiDAR and image information at an early stage, M-PP considerably improves inference time over the baseline, running at 28.49 Hz. It achieves inference speeds suitable for real-world applications while keeping the high performance of multimodal approaches. Extensive experiments show that our proposed architecture outperforms both MVX-Net and PointPillars for the pedestrian class in the KITTI 3D object detection dataset, with 62.78% in
2024
Authors
Gomes, I; Teixeira, LF; van Rijn, JN; Soares, C; Restivo, A; Cunha, L; Santos, M;
Publication
CoRR
Abstract
2024
Authors
Patrício, C; Barbano, CA; Fiandrotti, A; Renzulli, R; Grangetto, M; Teixeira, LF; Neves, JC;
Publication
CoRR
Abstract
2024
Authors
Campos, F; Petrychenko, L; Teixeira, LF; Silva, W;
Publication
Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024) co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20, 2024.
Abstract
Deep-learning techniques can improve the efficiency of medical diagnosis while challenging human experts’ accuracy. However, the rationale behind these classifiers’ decisions is largely opaque, which is dangerous in sensitive applications such as healthcare. Case-based explanations explain the decision process behind these mechanisms by exemplifying similar cases using previous studies from other patients. Yet, these may contain personally identifiable information, which makes them impossible to share without violating patients’ privacy rights. Previous works have used GANs to generate anonymous case-based explanations, which had limited visual quality. We solve this issue by employing a latent diffusion model in a three-step procedure: generating a catalogue of synthetic images, removing the images that closely resemble existing patients, and using this anonymous catalogue during an explanation retrieval process. We evaluate the proposed method on the MIMIC-CXR-JPG dataset and achieve explanations that simultaneously have high visual quality, are anonymous, and retain their explanatory value.
2024
Authors
Miranda, I; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;
Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024
Abstract
Breast cancer, the most prevalent cancer among women, poses a significant healthcare challenge, demanding effective early detection for optimal treatment outcomes. Mammography, the gold standard for breast cancer detection, employs low-dose X-rays to reveal tissue details, particularly cancerous masses and calcium deposits. This work focuses on evaluating the impact of incorporating anatomical knowledge to improve the performance and robustness of a breast cancer classification model. In order to achieve this, a methodology was devised to generate anatomical pseudo-labels, simulating plausible anatomical variations in cancer masses. These variations, encompassing changes in mass size and intensity, closely reflect concepts from the BI-RADS scale. Besides anatomical-based augmentation, we propose a novel loss term promoting the learning of cancer grading by our model. Experiments were conducted on publicly available datasets simulating both in-distribution and out-of-distribution scenarios to thoroughly assess the model's performance under various conditions.
2024
Authors
Aubard, M; Madureira, A; Teixeira, LF; Pinto, J;
Publication
CoRR
Abstract
With the growing interest in underwater exploration and monitoring, autonomous underwater vehicles have become essential. The recent interest in onboard deep learning (DL) has advanced real-time environmental interaction capabilities relying on efficient and accurate vision-based DL models. However, the predominant use of sonar in underwater environments, characterized by limited training data and inherent noise, poses challenges to model robustness. This autonomy improvement raises safety concerns for deploying such models during underwater operations, potentially leading to hazardous situations. This article aims to provide the first comprehensive overview of sonar-based DL under the scope of robustness. It studies sonar-based DL perception task models, such as classification, object detection, segmentation, and simultaneous localization and mapping. Furthermore, this article systematizes sonar-based state-of-the-art data sets, simulators, and robustness methods, such as neural network verification, out-of-distribution, and adversarial attacks. This article highlights the lack of robustness in sonar-based DL research and suggests future research pathways, notably establishing a baseline sonar-based data set and bridging the simulation-to-reality gap.