2019
Authors
Pereira, RC; Abreu, PH; Polisciuc, E; Machado, P;
Publication
Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Volume 3: IVAPP, Prague, Czech Republic, February 25-27, 2019.
Abstract
2019
Authors
Martins, N; Cruz, JM; Cruz, T; Abreu, PH;
Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.
Abstract
2021
Authors
Salazar, T; Santos, MS; Araújo, H; Abreu, PH;
Publication
IEEE Access
Abstract
2022
Authors
Santos, MS; Abreu, PH; Fernández, A; Luengo, J; Santos, JAM;
Publication
Engineering Applications of Artificial Intelligence
Abstract
2024
Authors
Perdigão, D; Cruz, T; Simões, P; Abreu, PH;
Publication
NOMS 2024 - IEEE Network Operations and Management Symposium, Seoul, Republic of Korea, May 6-10, 2024
Abstract
2024
Authors
Santos, JC; Santos, MS; Abreu, PH;
Publication
Advances in Intelligent Data Analysis XXII, Part I, IDA 2024
Abstract
Medical imaging classification improves patient prognoses by providing information on disease assessment, staging, and treatment response. The high demand for medical imaging acquisition requires the development of effective classification methodologies, with deep learning technologies occupying the pole position for this task. However, the major drawback of such techniques lies in their black-box nature, which has delayed their use in real-world scenarios. Interpretability methodologies have emerged as a solution to this problem due to their capacity to translate black-box models into clinically understandable information. Among the most promising interpretability methodologies are concept-based techniques, which explain the predictions of a deep neural network through user-specified concepts. Concept activation vectors and concept activation regions are concept-based implementations that provide global explanations for the predictions of neural networks. These explanations reveal the relationships that the network learned and can be used to identify possible errors made during training. In this work, concept activation vectors and concept activation regions are used to identify flaws in neural network training and to show how such weaknesses can be mitigated in a human-in-the-loop process that automatically improves the performance and trustworthiness of the classifier. To reach this goal, three phases were defined: training baseline classifiers, applying concept-based interpretability, and implementing a human-in-the-loop approach to improve classifier performance. Four medical imaging datasets of different modalities are included in this study to demonstrate the generality of the proposed method. The results identified concepts in each dataset that exposed flaws in classifier training, and consequently the human-in-the-loop approach, validated by a team of two clinicians, achieved a statistically significant improvement.
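As a minimal sketch of the concept-activation-vector idea the abstract builds on (Kim et al.'s TCAV), not the authors' implementation: a CAV is the normal to a linear boundary separating a concept's hidden-layer activations from random activations, and a TCAV-style score counts how often a class logit increases along that direction. The layer width, sample counts, and random stand-in activations and gradients below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 512                                          # width of the chosen hidden layer (assumption)
acts_concept = rng.normal(0.5, 1.0, (100, d))    # stand-in activations of concept images
acts_random = rng.normal(0.0, 1.0, (100, d))     # stand-in activations of random images

# Fit a linear classifier; its normalized weight vector is the CAV.
X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# TCAV-style score: fraction of inputs whose class logit increases when the
# layer activations move along the CAV, i.e. whose directional derivative
# grad_logit . cav is positive. Gradients are random stand-ins here.
grads = rng.normal(0.0, 1.0, (50, d))            # d(logit)/d(activation) per input
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score for the concept: {tcav_score:.2f}")

Under this reading, a concept whose score sits near chance, or points the wrong way for a class where clinicians expect it to matter, is the kind of training flaw the abstract describes, which the human-in-the-loop phase would then correct.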