2022
Authors
Silva, F; Pereira, T; Neves, I; Morgado, J; Freitas, C; Malafaia, M; Sousa, J; Fonseca, J; Negrao, E; de Lima, BF; da Silva, MC; Madureira, AJ; Ramos, I; Costa, JL; Hespanhol, V; Cunha, A; Oliveira, HP;
Publication
JOURNAL OF PERSONALIZED MEDICINE
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence, to achieve better quality healthcare. Considering the high incidence and mortality associated with lung cancer, there is a need for the most accurate clinical procedures; thus, the possibility of using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles are identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although this review focuses on lung cancer, we also provide a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
2022
Authors
Esengonul, M; Marta, A; Beirao, J; Pires, IM; Cunha, A;
Publication
MEDICINA-LITHUANIA
Abstract
Nowadays, Artificial Intelligence (AI) and its subfields, Machine Learning (ML) and Deep Learning (DL), are used for a variety of medical applications. AI can help clinicians track a patient's illness cycle, assist with diagnosis, and offer appropriate therapy alternatives. Each approach employed may address one or more AI problems, such as segmentation, prediction, recognition, classification, and regression. However, the amount of AI-featured research on Inherited Retinal Diseases (IRDs) is currently limited. Thus, this study aims to examine the artificial intelligence approaches used in managing IRDs, from diagnosis to treatment. A total of 20,906 articles were identified using a Natural Language Processing (NLP) method from the IEEE Xplore, Springer, Elsevier, MDPI, and PubMed databases, and papers submitted from 2010 to 30 October 2021 are included in this systematic review. The resulting study presents the AI approaches applied to images from different IRD patient categories and the most used AI architectures and models with their imaging modalities, identifying the main benefits and challenges of using such methods.
2022
Authors
Camara, J; Silva, B; Gouveia, A; Pires, IM; Coelho, P; Cunha, A;
Publication
SENSORS
Abstract
Ideally, screening for eye diseases would use specialized medical equipment to capture retinal fundus images. However, since this kind of equipment is generally expensive and has low portability, the development of technology and the emergence of smartphones have given rise to new portable and cheaper screening options, one of them being the D-Eye device. Compared to specialized equipment, this and similar smartphone-based devices capture retinal videos of lower quality and with a smaller field of view, yet with sufficient quality to perform a medical pre-screening; individuals can then be referred for specialized screening to obtain a medical diagnosis if necessary. Two methods were proposed to extract the relevant regions (the retinal zone) from these lower-quality videos. The first is based on classical image processing approaches, such as thresholding and the Hough Circle transform. The second extracts the retinal location by applying a neural network, YOLO v4, one of the methods reported in the literature with good performance for object detection, and it was demonstrated to be the preferred method. A mosaicing technique was then applied to the relevant retina regions to obtain a single, more informative image with a larger field of view. It was divided into two stages: in the first stage, the GLAMpoints neural network was applied to extract relevant points, and homography transformations were carried out to bring the overlapping common regions of the images into the same reference frame; in the second stage, a smoothing process was applied to the transitions between images.
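To make the classical first method more concrete, the snippet below is a minimal sketch (not the authors' implementation) of locating the roughly circular retinal zone in a single frame of a smartphone capture, using OpenCV thresholding followed by the Hough Circle transform. The file name deye_capture.mp4 and all threshold and Hough parameter values are illustrative assumptions and would need tuning for real D-Eye footage.

```python
import cv2
import numpy as np


def locate_retinal_region(frame_bgr):
    """Locate the (roughly circular) retinal zone in a smartphone fundus frame.

    Classical pipeline: grayscale -> blur -> threshold -> Hough Circle transform.
    Returns (x, y, r) of the detected circle, or None if nothing is found.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.medianBlur(gray, 5)

    # Suppress the dark background surrounding the illuminated retina
    # (threshold value is an illustrative assumption).
    _, mask = cv2.threshold(blurred, 30, 255, cv2.THRESH_BINARY)
    masked = cv2.bitwise_and(blurred, blurred, mask=mask)

    # Hough Circle transform; parameters are illustrative, not the paper's values.
    circles = cv2.HoughCircles(
        masked,
        cv2.HOUGH_GRADIENT,
        dp=1.2,
        minDist=frame_bgr.shape[0],        # expect a single dominant circle
        param1=100,
        param2=30,
        minRadius=frame_bgr.shape[0] // 8,
        maxRadius=frame_bgr.shape[0] // 2,
    )
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r


if __name__ == "__main__":
    cap = cv2.VideoCapture("deye_capture.mp4")  # hypothetical input video
    ok, frame = cap.read()
    if ok:
        print("Detected retinal circle (x, y, r):", locate_retinal_region(frame))
    cap.release()
```

In this kind of pipeline, the detected circle is typically used to crop each frame to the retinal zone before the learning-based steps (YOLO v4 detection and GLAMpoints-based mosaicing) described in the abstract.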
2022
Authors
Camara, J; Neto, A; Pires, IM; Villasana, MV; Zdravevski, E; Cunha, A;
Publication
DIAGNOSTICS
Abstract
Glaucoma is a chronic optic neuropathy characterized by irreversible damage to the retinal nerve fiber layer (RNFL), resulting in changes in the visual field (VF). Glaucoma screening is performed through a complete ophthalmological examination, using images of the optic papilla obtained in vivo for the evaluation of glaucomatous characteristics, together with eye pressure and visual field assessment. Identifying the glaucomatous papilla is quite important, as optic papilla images are considered the gold standard for tracking the disease. Therefore, this article presents a review of the diagnostic methods used over the last five years to identify the glaucomatous papilla through technology. Based on the analyzed works, the current state-of-the-art methods are identified, the current challenges are analyzed, and the shortcomings of these methods are investigated, especially from the point of view of automation and independence in performing these measurements. Finally, topics for future work and the challenges that need to be solved are proposed.
2022
Authors
Renna, F; Martins, M; Neto, A; Cunha, A; Libanio, D; Dinis-Ribeiro, M; Coimbra, M;
Publication
DIAGNOSTICS
Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase in both incidence and mortality is expected due to demographic effects if no interventions are made. Upper gastrointestinal endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, in improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential to compensate for the pitfalls of UGIE by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold task of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesions. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges on the roadmap for the effective integration of AI tools into UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures and of methods able to embed domain knowledge into image/video classifiers, as well as the availability of large, annotated datasets.
2022
Authors
Camara, J; Rezende, R; Pires, IM; Cunha, A;
Publication
JOURNAL OF CLINICAL MEDICINE
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as those using deep learning techniques, which are used to solve complex problems in medical imaging, particularly in the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing large-scale glaucoma screening protocols in the population, reducing diagnostic doubts among specialists, and enabling early treatment to delay the onset of blindness. However, the images are obtained with different cameras, in distinct locations, and from various population groups, and are centered on different parts of the retina. Further limitations include the small amount of data and the lack of segmentations of the optic papilla and of the excavation (cup). This work is intended to offer contributions to the structure and presentation of the public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases provide images with expert segmentations of the disc and cup and a division into training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable, and the quality and presentation of the images are heterogeneous. Moreover, the databases use different criteria for the binary classification of glaucoma versus non-glaucoma, do not offer simultaneous pictures of both eyes, and do not contain elements for early diagnosis.