2020
Authors
Santos, JC; Abreu, MH; Santos, MS; Duarte, H; Alpoim, T; Sousa, S; Abreu, PH;
Publication
JOURNAL OF CLINICAL ONCOLOGY
Abstract
2015
Authors
Santos, MS; Abreu, PH; Garcia Laencina, PJ; Simao, A; Carvalho, A;
Publication
JOURNAL OF BIOMEDICAL INFORMATICS
Abstract
Liver cancer is the sixth most frequently diagnosed cancer and, in particular, Hepatocellular Carcinoma (HCC) represents more than 90% of primary liver cancers. Clinicians assess each patient's treatment on the basis of evidence-based medicine, which may not always apply to a specific patient, given the biological variability among individuals. Over the years, and for the particular case of Hepatocellular Carcinoma, some research studies have developed strategies for assisting clinicians in decision making, using computational methods (e.g. machine learning techniques) to extract knowledge from clinical data. However, these studies have some limitations that have not yet been addressed: some do not focus entirely on Hepatocellular Carcinoma patients, others have strict application boundaries, and none considers either the heterogeneity between patients or the presence of missing data, a common drawback in healthcare contexts. In this work, a real, complex Hepatocellular Carcinoma database composed of heterogeneous clinical features is studied. We propose a new cluster-based oversampling approach, robust to small and imbalanced datasets, which accounts for the heterogeneity of patients with Hepatocellular Carcinoma. The preprocessing procedures of this work are based on data imputation considering an appropriate distance metric for both heterogeneous and missing data (HEOM) and on clustering studies to assess the underlying patient groups in the studied dataset (K-means). The final approach is applied in order to diminish the impact of underlying patient profiles with reduced sizes on survival prediction. It is based on K-means clustering and the SMOTE algorithm to build a representative dataset and use it as a training set for different machine learning procedures (logistic regression and neural networks). The results are evaluated in terms of survival prediction and compared with baseline approaches that do not consider clustering and/or oversampling, using the Friedman rank test. Our proposed methodology coupled with neural networks outperformed all others, suggesting an improvement over the classical approaches currently used in Hepatocellular Carcinoma prediction models.
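The cluster-then-oversample idea described in the abstract can be sketched in a few lines: cluster the data, find the smallest cluster, and synthesize SMOTE-style points inside it by interpolating between neighbours. This is a minimal, pure-Python illustration with toy 2-D data, not the paper's implementation; the helper names (`kmeans`, `smote`) and all parameters are illustrative assumptions.

```python
import math
import random

random.seed(0)

def kmeans(points, k, iters=20):
    """Plain K-means: returns a cluster label for each point."""
    centroids = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

def smote(points, n_new, k=3):
    """SMOTE-style interpolation: new samples lie between a point
    and one of its k nearest neighbours in the same cluster."""
    if n_new <= 0 or len(points) < 2:
        return []
    new = []
    for _ in range(n_new):
        p = random.choice(points)
        neighbours = sorted((q for q in points if q is not p),
                            key=lambda q: math.dist(p, q))[:k]
        q = random.choice(neighbours)
        t = random.random()
        new.append([a + t * (b - a) for a, b in zip(p, q)])
    return new

# Toy data: two patient "profiles", one much smaller than the other.
big = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(40)]
small = [[random.gauss(5, 1), random.gauss(5, 1)] for _ in range(6)]
data = big + small

labels = kmeans(data, 2)
sizes = [labels.count(c) for c in (0, 1)]
minority = min((0, 1), key=lambda c: sizes[c])
members = [p for p, l in zip(data, labels) if l == minority]
balanced = data + smote(members, max(sizes) - min(sizes))
```

A real pipeline would replace `math.dist` with the HEOM metric mentioned in the abstract, so that categorical and missing entries are handled, and would feed `balanced` to the downstream classifier.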
2024
Authors
Santos, JC; Santos, MS; Abreu, PH;
Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XXII, PT I, IDA 2024
Abstract
Medical imaging classification improves patient prognoses by providing information on disease assessment, staging, and treatment response. The high demand for medical imaging acquisition requires the development of effective classification methodologies, with deep learning technologies occupying the pole position for this task. However, the major drawback of such techniques lies in their black-box nature, which has delayed their use in real-world scenarios. Interpretability methodologies have emerged as a solution to this problem due to their capacity to translate black-box models into clinically understandable information. The most promising interpretability methodologies are concept-based techniques that can explain the predictions of a deep neural network through user-specified concepts. Concept activation regions and concept activation vectors are concept-based implementations that provide global explanations for the predictions of neural networks. The explanations provided allow the identification of the relationships that the network learned and can be used to identify possible errors during training. In this work, concept activation vectors and concept activation regions are used to identify flaws in neural network training and to show how these weaknesses can be mitigated in a human-in-the-loop process, automatically improving the performance and trustworthiness of the classifier. To reach such a goal, three phases have been defined: training baseline classifiers, applying concept-based interpretability, and implementing a human-in-the-loop approach to improve classifier performance. Four medical imaging datasets of different modalities are included in this study to demonstrate the generality of the proposed method. The results identified concepts in each dataset that revealed flaws in classifier training; consequently, the human-in-the-loop approach, validated by a team of two clinicians, achieved a statistically significant improvement.
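The core mechanics of concept activation vectors can be illustrated without a trained network: a CAV is a direction in a layer's activation space separating concept examples from random examples, and conceptual sensitivity is the directional derivative of the class logit along that direction. The sketch below uses a mean-difference direction and a hypothetical linear head `w_class` on toy 3-D activations; it is an assumption-laden illustration, not the authors' implementation (which uses trained deep networks and per-example gradients).

```python
import random

random.seed(1)

def mean_vec(vs):
    return [sum(col) / len(vs) for col in zip(*vs)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy "activations" at one hidden layer (3-D) for concept vs. random inputs:
# the concept shifts the first coordinate.
concept_acts = [[random.gauss(2, 0.5), random.gauss(0, 0.5), random.gauss(0, 0.5)]
                for _ in range(30)]
random_acts = [[random.gauss(0, 0.5), random.gauss(0, 0.5), random.gauss(0, 0.5)]
               for _ in range(30)]

# CAV: direction pointing from the random examples towards the concept examples.
cav = sub(mean_vec(concept_acts), mean_vec(random_acts))

# For a linear head with logit w·h, the gradient w.r.t. the activations h is w,
# so the sensitivity of the class to the concept is simply w·cav.
w_class = [1.0, 0.2, -0.1]          # hypothetical class weights
sensitivity = dot(w_class, cav)

# TCAV-style score: fraction of per-example gradients aligned with the CAV.
# With a linear head the gradient is the same w for every example, so the
# score collapses to 0 or 1; a real network yields per-example gradients.
tcav_score = 1.0 if sensitivity > 0 else 0.0
```

A score near 1 suggests the class prediction is driven by the concept; a concept the clinicians consider spurious but that scores highly is exactly the kind of training flaw the human-in-the-loop phase would target.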
2017
Authors
Santos, MS; Soares, JP; Abreu, PH; Araújo, H; Santos, J;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Dealing with missing data is a crucial step in the preprocessing stage of most data mining projects. Especially in healthcare contexts, addressing this issue is fundamental, since it may result in keeping or losing critical patient information that can help physicians in their daily clinical practice. Over the years, many researchers have addressed this problem, basing their approach on the implementation of a set of imputation techniques and evaluating their performance in classification tasks. These classic approaches, however, do not consider some intrinsic data information that could be related to the performance of those algorithms, such as the features' distribution. Establishing a correspondence between data distribution and the most appropriate imputation method avoids the need to repeatedly test a large set of methods, since it provides a heuristic on the best choice for each feature in the study. The goal of this work is to understand the relationship between data distribution and the performance of well-known imputation techniques, such as Mean, Decision Trees, k-Nearest Neighbours, Self-Organizing Maps and Support Vector Machines imputation. Several publicly available datasets, all complete, were selected according to several characteristics, such as the number of distributions, features and instances. Missing values were artificially generated at different percentages, and the imputation methods were evaluated in terms of Predictive and Distributional Accuracy. Our findings show that there is a relationship between the features' distribution and the algorithms' performance, although some factors must be taken into account, such as the number of features per distribution and the missing rate at stake.
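The evaluation protocol in the abstract, artificially removing values from complete data and scoring each imputation method on how closely it recovers them, can be sketched with two of the compared methods, Mean and k-Nearest Neighbours imputation. This is a minimal pure-Python illustration on toy data; the helper names and the RMSE-style predictive-accuracy measure are assumptions, not the paper's exact setup.

```python
import math
import random

random.seed(2)

def mean_impute(data, col, missing_rows):
    """Fill every missing value in `col` with the observed mean."""
    observed = [row[col] for i, row in enumerate(data) if i not in missing_rows]
    fill = sum(observed) / len(observed)
    return {i: fill for i in missing_rows}

def knn_impute(data, col, missing_rows, k=3):
    """Fill each missing value with the mean of its k nearest rows
    (distance computed on the remaining, fully observed columns)."""
    other = [c for c in range(len(data[0])) if c != col]
    complete = [i for i in range(len(data)) if i not in missing_rows]
    fills = {}
    for i in missing_rows:
        nearest = sorted(complete, key=lambda j: math.dist(
            [data[i][c] for c in other], [data[j][c] for c in other]))
        fills[i] = sum(data[j][col] for j in nearest[:k]) / k
    return fills

def rmse(truth, fills):
    """Predictive accuracy proxy: error between true and imputed values."""
    return math.sqrt(sum((truth[i] - v) ** 2 for i, v in fills.items()) / len(fills))

# Toy complete dataset: column 1 depends on column 0, so kNN should beat Mean.
data = [[x, 2 * x + random.gauss(0, 0.1)] for x in range(20)]
truth = {i: data[i][1] for i in range(20)}
missing = {3, 8, 15}    # rows where column 1 is treated as missing

err_mean = rmse(truth, mean_impute(data, 1, missing))
err_knn = rmse(truth, knn_impute(data, 1, missing))
```

Repeating this over features drawn from different distributions and at different missing rates is what lets the study relate each feature's distribution to the imputation method that recovers it best.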
2019
Authors
Santos, MS; Pereira, RC; Costa, AF; Soares, JP; Santos, JAM; Abreu, PH;
Publication
IEEE Access
Abstract
2017
Authors
Santos, MS; Abreu, PH; García Laencina, PJ; Simão, A; Carvalho, A;
Publication
Abstract