2021
Authors
Zhang, O; Ding, C; Pereira, T; Xiao, R; Gadhoumi, K; Meisel, K; Lee, RJ; Chen, YR; Hu, X;
Publication
IEEE ACCESS
Abstract
Photoplethysmography (PPG) is a noninvasive way to monitor various aspects of the circulatory system, and is becoming increasingly widespread in biomedical processing. Recently, deep learning methods for analyzing PPG have also become prevalent, achieving state-of-the-art results on heart rate estimation, atrial fibrillation detection, and motion artifact identification. Consequently, a need for interpretable deep learning has arisen within the field of biomedical signal processing. In this paper, we pioneer novel explanatory metrics which leverage domain-expert knowledge to validate a deep learning model. We visualize model attention over a whole test set using saliency methods and compare it to human expert annotations. Congruence, our first metric, measures the proportion of model attention within expert-annotated regions. Our second metric, Annotation Classification, measures how much of the expert annotations our deep learning model pays attention to. Finally, we apply our metrics to compare a signal-based model and an image-based model for PPG signal quality classification. Both models are deep convolutional networks based on the ResNet architectures. We show that our signal-based one-dimensional model acts in a more explainable manner than our image-based model; on average, 50.78% of the one-dimensional model's attention is within expert annotations, whereas 36.03% of the two-dimensional model's attention is within expert annotations. Similarly, when thresholding the one-dimensional model's attention, one can more accurately predict whether each pixel of the PPG is annotated as artifactual by an expert. Through this test case, we demonstrate how our metrics can provide a quantitative and dataset-wide analysis of how explainable the model is.
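As a minimal sketch of how the two metrics could be computed for a 1-D PPG saliency map, assuming a binary expert-annotation mask of the same length (the function names, threshold value, and toy data below are illustrative assumptions, not taken from the paper):

import numpy as np

def congruence(saliency, expert_mask):
    # Proportion of total model attention that falls inside expert-annotated regions.
    saliency = np.abs(saliency)
    total = saliency.sum()
    return float(saliency[expert_mask.astype(bool)].sum() / total) if total > 0 else 0.0

def annotation_classification(saliency, expert_mask, threshold=0.3):
    # Treat thresholded attention as a per-sample prediction of "annotated as artifact"
    # and score its agreement with the expert mask.
    pred = (np.abs(saliency) >= threshold).astype(int)
    return float((pred == expert_mask.astype(int)).mean())

# Toy example: saliency over 8 PPG samples, expert marks samples 2-5 as artifactual.
sal = np.array([0.0, 0.1, 0.6, 0.8, 0.7, 0.5, 0.1, 0.0])
mask = np.array([0, 0, 1, 1, 1, 1, 0, 0])
print(congruence(sal, mask))                 # share of attention inside the annotation
print(annotation_classification(sal, mask))  # per-sample agreement after thresholding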
2021
Authors
Oliveira, J; Nogueira, D; Renna, F; Ferreira, C; Jorge, AM; Coimbra, M;
Publication
2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC)
Abstract
Cardiac auscultation is the key screening procedure to detect and identify cardiovascular diseases (CVDs). One of the many steps in automatically detecting CVDs using auscultation concerns the detection and delimitation of the heart sound boundaries, a process known as segmentation. Whether or not to include a segmentation step in the signal classification pipeline is nowadays a topic of discussion. To our knowledge, the outcome of a segmentation algorithm has been used almost exclusively to align the different signal segments according to the heartbeat. In this paper, the need for a heartbeat alignment step is tested and evaluated over different machine learning algorithms, including deep learning solutions. Of the different classifiers tested, the Gated Recurrent Unit (GRU) Network and Convolutional Neural Network (CNN) algorithms are shown to be the most robust. Namely, these algorithms can detect the presence of heart murmurs even without a heartbeat alignment step. Furthermore, Support Vector Machine (SVM) and Random Forest (RF) algorithms require an explicit segmentation step to effectively detect heart sounds and murmurs; without it, the overall performance is expected to drop by approximately 5% in both cases.
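A minimal sketch of the "no heartbeat alignment" setting described above, assuming fixed-length, unsegmented PCG windows fed to a small GRU classifier (the layer sizes, window length, and sampling rate are illustrative assumptions):

import torch
import torch.nn as nn

class MurmurGRU(nn.Module):
    # Minimal GRU classifier over raw PCG windows, with no alignment to the heartbeat.
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # murmur present / absent

    def forward(self, x):                  # x: (batch, time, features)
        _, h = self.gru(x)
        return self.head(h[-1])

# Unsegmented 3-second windows at a hypothetical 1 kHz sampling rate.
batch = torch.randn(8, 3000, 1)
print(MurmurGRU()(batch).shape)            # torch.Size([8, 2])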
2021
Authors
Wanderley, DS; Ferreira, CA; Campilho, A; Silva, JA;
Publication
CENTERIS 2021 - International Conference on ENTERprise Information Systems / ProjMAN 2021 - International Conference on Project MANagement / HCist 2021 - International Conference on Health and Social Care Information Systems and Technologies 2021, Braga, Portugal
Abstract
The detection of ovarian structures from ultrasound images is an important task in gynecological and reproductive medicine. An automatic detection system for ovarian structures can serve as a second opinion for less experienced physicians or in complex ultrasound interpretations. This work presents a study of three popular CNN-based object detectors applied to the detection of healthy ovarian structures, namely ovary and follicles, in B-mode ultrasound images. Faster R-CNN presented the best results, with a precision of 95.5% and a recall of 94.7% over both classes, and was able to detect all the ovaries correctly. RetinaNet showed competitive results, exceeding 90% precision and recall. Despite being very fast and suitable for real-time applications, YOLOv3 was ineffective in detecting ovaries and had the worst results in detecting follicles. We also compare the CNN results with classical computer vision methods presented in the ovarian follicle detection literature.
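As a hedged illustration of a Faster R-CNN setup for this task, the sketch below uses torchvision's COCO-pretrained fasterrcnn_resnet50_fpn with its box predictor replaced for three classes (background, ovary, follicle); the class count and pretrained backbone are assumptions, and the detector from the study would require fine-tuning on annotated B-mode ultrasound images:

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)  # background, ovary, follicle

model.eval()
with torch.no_grad():
    image = torch.rand(3, 512, 512)     # placeholder for a grayscale B-mode frame replicated to 3 channels
    detections = model([image])[0]      # dict with "boxes", "labels", "scores"
    keep = detections["scores"] > 0.5   # simple confidence threshold
    print(detections["boxes"][keep], detections["labels"][keep])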
2021
Authors
Schüle, R; Timmann, D; Erasmus, CE; Reichbauer, J; Wayand, M; Baets, J; Balicza, P; Chinnery, P; Dürr, A; Haack, T; Hengel, H; Horvath, R; Houlden, H; Kamsteeg, EJ; Kamsteeg, C; Lohmann, K; Macaya, A; Marcé Grau, A; Maver, A; Molnar, J; Münchau, A; Peterlin, B; Riess, O; Schöls, L; Schüle, R; Stevanin, G; Synofzik, M; Timmerman, V; van de Warrenburg, B; van Os, N; Vandrovcova, J; Wayand, M; Wilke, C; van de Warrenburg, B; Schöls, L; Wilke, C; Bevot, A; Zuchner, S; Beltran, S; Laurie, S; Matalonga, L; Graessner, H; Synofzik, M; Graessner, H; Zurek, B; Ellwanger, K; Ossowski, S; Demidov, G; Sturm, M; Schulze Hentrich, JM; Heutink, P; Brunner, H; Scheffer, H; Hoogerbrugge, N; Hoischen, A; ’t Hoen, PAC; Vissers, LELM; Gilissen, C; Steyaert, W; Sablauskas, K; de Voer, RM; Janssen, E; de Boer, E; Steehouwer, M; Yaldiz, B; Kleefstra, T; Brookes, AJ; Veal, C; Gibson, S; Wadsley, M; Mehtarizadeh, M; Riaz, U; Warren, G; Dizjikan, FY; Shorter, T; Töpf, A; Straub, V; Bettolo, CM; Specht, S; Clayton Smith, J; Banka, S; Alexander, E; Jackson, A; Faivre, L; Thauvin, C; Vitobello, A; Denommé Pichon, AS; Duffourd, Y; Tisserant, E; Bruel, AL; Peyron, C; Pélissier, A; Beltran, S; Gut, IG; Laurie, S; Piscia, D; Matalonga, L; Papakonstantinou, A; Bullich, G; Corvo, A; Garcia, C; Fernandez Callejo, M; Hernández, C; Picó, D; Paramonov, I; Lochmüller, H; Gumus, G; Bros Facer, V; Rath, A; Hanauer, M; Olry, A; Lagorce, D; Havrylenko, S; Izem, K; Rigour, F; Durr, A; Davoine, CS; Guillot Noel, L; Heinzmann, A; Coarelli, G; Bonne, G; Evangelista, T; Allamand, V; Nelson, I; Yaou, RB; Metay, C; Eymard, B; Cohen, E; Atalaia, A; Stojkovic, T; Macek, M; Turnovec, M; Thomasová, D; Kremliková, RP; Franková, V; Havlovicová, M; Kremlik, V; Parkinson, H; Keane, T; Spalding, D; Senf, A; Robinson, P; Danis, D; Robert, G; Costa, A; Patch, C; Hanna, M; Houlden, H; Reilly, M; Vandrovcova, J; Muntoni, F; Zaharieva, I; Sarkozy, A; de Jonghe, P; Nigro, V; Banfi, S; Torella, A; Musacchia, F; Piluso, G; Ferlini, A; Selvatici, R; Rossi, R; Neri, M; Aretz, S; Spier, I; Sommer, AK; Peters, S; Oliveira, C; Pelaez, JG; Matos, AR; José, CS; Ferreira, M; Gullo, I; Fernandes, S; Garrido, L; Ferreira, P; Carneiro, F; Swertz, MA; Johansson, L; van der Velde, JK; van der Vries, G; Neerincx, PB; Roelofs Prins, D; Köhler, S; Metcalfe, A; Verloes, A; Drunat, S; Rooryck, C; Trimouille, A; Castello, R; Morleo, M; Pinelli, M; Varavallo, A; De la Paz, MP; Sánchez, EB; Martín, EL; Delgado, BM; de la Rosa, FJAG; Ciolfi, A; Dallapiccola, B; Pizzi, S; Radio, FC; Tartaglia, M; Renieri, A; Benetti, E; Balicza, P; Molnar, MJ; Maver, A; Peterlin, B; Münchau, A; Lohmann, K; Herzog, R; Pauly, M; Macaya, A; Marcé Grau, A; Osorio, AN; de Benito, DN; Lochmüller, H; Thompson, R; Polavarapu, K; Beeson, D; Cossins, J; Cruz, PMR; Hackman, P; Johari, M; Savarese, M; Udd, B; Horvath, R; Capella, G; Valle, L; Holinski Feder, E; Laner, A; Steinke Lange, V; Schröck, E; Rump, A;
Publication
European Journal of Human Genetics
Abstract
In the original publication of the article, the consortium author lists were missing. © 2021, The Author(s).
2021
Authors
Lima, J; Rocha, L; Rocha, C; Costa, P;
Publication
IAES International Journal of Robotics and Automation (IJRA)
Abstract
2021
Authors
Soares, L; Cruz, P; Novais, S; Ferreira, A; Frazao, O; Silva, S;
Publication
IEEE INSTRUMENTATION & MEASUREMENT MAGAZINE
Abstract
A refractometric sensor was applied to measure, in real time, the concentration of Active Pharmaceutical Ingredients (APIs) in crystallization experiments. Paracetamol was used as a model system due to the extensive literature available for this API. The refractometric sensor was fabricated by a simple and inexpensive method that consisted of splicing a short section of a multimode fiber to a single-mode fiber. The compact geometry of this sensor, with an external diameter of just 125 µm, allowed it to measure the concentration of paracetamol both in a stirred tank crystallizer operating in batch mode and in an oscillatory flow crystallizer operating continuously. The proposed technique shows the potential to monitor the concentration of APIs in crystallizers of different sizes and geometries as an alternative to more expensive and complex analysis equipment.
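The abstract does not give the calibration relation between sensor output and concentration; a minimal sketch, assuming a linear calibration fitted against solutions of known paracetamol concentration (all values below are invented for illustration):

import numpy as np

# Hypothetical calibration data: sensor output (a.u.) vs. known concentration (g API / kg solvent).
sensor_output = np.array([0.512, 0.538, 0.563, 0.590, 0.615])
concentration = np.array([10.0, 15.0, 20.0, 25.0, 30.0])

# First-order calibration curve, assumed linear over this range.
slope, intercept = np.polyfit(sensor_output, concentration, deg=1)

def output_to_concentration(reading):
    # Convert a real-time sensor reading into an API concentration estimate (illustrative only).
    return slope * reading + intercept

print(output_to_concentration(0.575))   # mid-range concentration estimate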