2024
Authors
Barbosa, D; Ferreira, M; Braz, G Jr; Salgado, M; Cunha, A;
Publication
IEEE ACCESS
Abstract
This article presents a systematic review of Multiple Instance Learning (MIL) applied to image classification, specifically highlighting its applications in medical imaging. Motivated by the need for a comprehensive and up-to-date analysis due to the scarcity of recent reviews, this study uses defined selection criteria to systematically assess the quality and synthesize data from relevant studies. Focusing on MIL, a subfield of machine learning that deals with learning from sets of instances, or bags, this review is crucial for medical diagnosis, where accurate lesion detection is a challenge. The review details the methodologies, advances, and practical implementations of MIL, emphasizing the attention and transformer mechanisms that improve the analysis of medical images. Challenges such as the need for extensive annotated datasets and significant computational resources are discussed. In addition, the review covers three main topics: the characterization of MIL algorithms in various imaging domains, a detailed evaluation of performance metrics, and a critical analysis of data structures and computational resources. Despite these challenges, MIL offers a promising direction for research with significant implications for medical diagnostics, highlighting the importance of continued exploration and improvement in this area.
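The bag-of-instances idea behind MIL can be illustrated with attention-based pooling, one of the attention mechanisms the review highlights. This is a minimal NumPy sketch under illustrative assumptions (random features and weights, not drawn from any reviewed study):

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(instances, V, w):
    """Aggregate a bag of instance features into one bag embedding.

    instances: (n_instances, d) features for one bag (e.g. image patches)
    V: (d, h) projection matrix of a small attention network (hypothetical)
    w: (h,) attention scoring vector (hypothetical)
    """
    scores = np.tanh(instances @ V) @ w             # one raw score per instance
    alphas = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    return alphas @ instances, alphas               # weighted sum over the bag

d, h = 8, 4
bag = rng.normal(size=(12, d))   # a bag of 12 instances with 8-dim features
V = rng.normal(size=(d, h))
w = rng.normal(size=h)

embedding, alphas = attention_pool(bag, V, w)
print(embedding.shape)           # (8,) -- one embedding per bag
print(alphas.shape)              # (12,) -- weights sum to 1 over instances
```

A bag-level classifier then operates on the pooled embedding, so only bag labels are needed at training time, which is the core appeal of MIL for weakly annotated medical images.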
2024
Authors
Camara, J; Cunha, A;
Publication
MEDICINA-LITHUANIA
Abstract
Glaucoma is one of the leading causes of irreversible blindness in the world. Early diagnosis and treatment increase the chances of preserving vision. However, despite advances in techniques for the functional and structural assessment of the retina, specialists still encounter many challenges, in part due to the varied presentations of the optic nerve head (ONH) in the population, the lack of explicit references that define the limits of glaucomatous optic neuropathy (GON), specialist experience, and the quality of patients' responses to some ancillary exams. Computer vision uses deep learning (DL) methodologies, successfully applied to assist in the diagnosis and progression of GON, with the potential to provide objective references for classification, avoiding possible biases in experts' decisions. To this end, studies have used color fundus photographs (CFPs), functional exams such as visual field (VF), and structural exams such as optical coherence tomography (OCT). However, the minimum detection limits of GON characteristics achievable through these methodologies still need to be established. This study analyzes the use of DL methodologies in the various stages of glaucoma screening, compared to the clinic, to reduce the costs of GON assessment and the work carried out by specialists, to improve the speed of diagnosis, and to homogenize opinions. It concludes that the DL methodologies used in automated glaucoma screening can bring more robust results closer to reality.
2024
Authors
Fontes, M; de Almeida, JDS; Cunha, A;
Publication
IEEE ACCESS
Abstract
Explainable Artificial Intelligence (XAI) is an area of growing interest, particularly in medical imaging, where example-based techniques show great potential. This paper is a systematic review of recent example-based XAI techniques, a promising approach that remains relatively unexplored in clinical practice and medical image analysis. A selection and analysis of recent studies using example-based XAI techniques for interpreting medical images was carried out. Several approaches were examined, highlighting how each contributes to increasing accuracy, transparency, and usability in medical applications. These techniques were compared and discussed in detail, considering their advantages and limitations in the context of medical imaging, with a focus on improving the integration of these technologies into clinical practice and medical decision-making. The review also pointed out gaps in current research, suggesting directions for future investigations. The need to develop XAI methods that are not only technically efficient but also ethically responsible and adaptable to the needs of healthcare professionals was emphasised. Thus, the paper sought to establish a solid foundation for understanding and advancing example-based XAI techniques in medical imaging, promoting a more integrated and patient-centred approach to medicine.
2024
Authors
António Cunha; Nuno M. Garcia; Jorge Marx Gómez; Sandra Pereira;
Publication
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Abstract
2024
Authors
Santos, T; Oliveira, H; Cunha, A;
Publication
COMPUTER SCIENCE REVIEW
Abstract
In recent years, the number of crimes with weapons has grown on a large scale worldwide, mainly in locations where enforcement is lacking or possessing weapons is legal. Combating this type of criminal activity requires identifying criminal behavior early so that police and law enforcement agencies can take immediate action. Although the human visual system is highly evolved and able to process images quickly and accurately, an individual who watches very similar scenes for a long time becomes prone to slowness and lapses of attention. In addition, large surveillance systems with numerous pieces of equipment require a surveillance team, which increases the cost of operation. There are several solutions for automatic weapon detection based on computer vision; however, these have limited performance in challenging contexts. A systematic review of the current literature on deep learning-based weapon detection was conducted to identify the methods used, the main characteristics of the existing datasets, and the main problems in the area of automatic weapon detection. The most used models were the Faster R-CNN and the YOLO architecture. The use of realistic images and synthetic data showed improved performance. Several challenges were identified in weapon detection, such as poor lighting conditions and the difficulty of small weapon detection, the latter being the most prominent. Finally, some future directions are outlined, with a special focus on small weapon detection.
2024
Authors
Portela, F; Sousa, JJ; Araújo-Paredes, C; Peres, E; Morais, R; Pádua, L;
Publication
SENSORS
Abstract
Grapevines (Vitis vinifera L.) are one of the most economically relevant crops worldwide, yet they are highly vulnerable to various diseases, causing substantial economic losses for winegrowers. This systematic review evaluates the application of remote sensing and proximal tools for vineyard disease detection, addressing current capabilities, gaps, and future directions in sensor-based field monitoring of grapevine diseases. The review covers 104 studies published between 2008 and October 2024, identified through searches in Scopus and Web of Science, conducted on 25 January 2024, and updated on 10 October 2024. The included studies focused exclusively on the sensor-based detection of grapevine diseases, while excluded studies were not related to grapevine diseases, did not use remote or proximal sensing, or were not conducted in field conditions. The most studied diseases include downy mildew, powdery mildew, Flavescence dorée, esca complex, rots, and viral diseases. The main sensors identified for disease detection are RGB, multispectral, and hyperspectral sensors, and field spectroscopy. A trend identified in recently published research is the integration of artificial intelligence techniques, such as machine learning and deep learning, to improve disease detection accuracy. The results demonstrate progress in sensor-based disease monitoring, with most studies concentrating on specific diseases, sensor platforms, or methodological improvements. Future research should focus on standardizing methodologies, integrating multi-sensor data, and validating approaches across diverse vineyard contexts to improve commercial applicability and sustainability, addressing both economic and environmental challenges.