
About

Tamás Karácsony is a Ph.D. candidate in the Carnegie Mellon Portugal (CMU Portugal) affiliated Doctoral Program in Electrical and Computer Engineering (PDEEC) at the Department of Electrical and Computer Engineering of the Faculty of Engineering of the University of Porto (FEUP), Portugal. He is also a researcher at INESC TEC (Institute for Systems and Computer Engineering), in the Center for Biomedical Engineering Research (C-BER) and its Biomedical Research And INnovation (BRAIN) research group.


His Ph.D. thesis, "Explainable Deep Learning Based Epileptic Seizure Classification with Clinical 3D Motion Capture", is supervised by Prof. João Paulo Cunha and co-supervised by Prof. Fernando De la Torre. He is a visiting research scholar at the Computational Behavior (CUBE) Lab, working with Prof. László A. Jeni, and at the Human Sensing Laboratory (HSL), working with Prof. Fernando De la Torre, both at The Robotics Institute (RI) of Carnegie Mellon University (CMU). His research focuses on Advanced Human Sensing, 3D Motion Capture, Action and Pattern Recognition, Computer Vision, and Neuroengineering.


He earned an MSc with honours in Biomedical Engineering (2018) from the Technical University of Denmark (DTU), as well as a BSc (2016) and an MSc with highest honours (2020) in Mechatronics Engineering from the Budapest University of Technology and Economics (BUTE).

Details

  • Name
    Tamás Karácsony
  • Role
    Research Assistant
  • Since
    1st May 2019
  • Nationality
    Hungarian
  • Contacts
    +351 222 094 000
    tamas.karacsony@inesctec.pt
Publications

2023

BlanketSet - A Clinical Real-World In-Bed Action Recognition and Qualitative Semi-Synchronised Motion Capture Dataset

Authors
Carmona, J; Karacsony, T; Cunha, JPS;

Publication
2023 IEEE 7th Portuguese Meeting on Bioengineering (ENBENG)

Abstract
Clinical in-bed video-based human motion analysis is a highly relevant computer vision topic for several biomedical applications. Nevertheless, the main public large datasets (e.g. ImageNet or 3DPW) used for deep learning approaches lack annotated examples for these clinical scenarios. To address this issue, we introduce BlanketSet, an RGB-IRD action recognition dataset of sequences performed in a hospital bed. This dataset has the potential to help bridge the improvements attained in more general large datasets to these clinical scenarios. Information on how to access the dataset is available at rdm.inesctec.pt/dataset/nis-2022-004.

2023

BlanketGen - A Synthetic Blanket Occlusion Augmentation Pipeline for Motion Capture Datasets

Authors
Carmona, J; Karacsony, T; Cunha, JPS;

Publication
2023 IEEE 7th Portuguese Meeting on Bioengineering (ENBENG)

Abstract
Human motion analysis has seen drastic improvements recently; however, it still lags behind for clinical in-bed scenarios due to the lack of representative datasets. To address this issue, we implemented BlanketGen, a pipeline that augments videos with synthetic blanket occlusions. With this pipeline, we generated an augmented version of the pose estimation dataset 3DPW, called BlanketGen3DPW. We then used this new dataset to fine-tune a Deep Learning model to improve its performance in these scenarios, with promising results. Code and further information are available at https://gitlab.inesctec.pt/brain-lab/brainlab-public/blanket-gen-releases.
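
For readers unfamiliar with this kind of augmentation, a minimal sketch of the general idea follows, assuming a pre-rendered RGBA occluder layer and a simple alpha-composite. It is an illustration only, not the BlanketGen implementation (which renders deformable blankets over the original footage), and the function name composite_occlusion is hypothetical.

    # Minimal sketch of synthetic occlusion augmentation (illustrative only;
    # the real BlanketGen pipeline renders deformable blankets, e.g. in Blender).
    import numpy as np

    def composite_occlusion(frame: np.ndarray, occluder_rgba: np.ndarray) -> np.ndarray:
        """Alpha-composite a pre-rendered RGBA occluder (e.g. a synthetic blanket)
        over an RGB video frame of the same height and width."""
        rgb = occluder_rgba[..., :3].astype(np.float32)
        alpha = occluder_rgba[..., 3:4].astype(np.float32) / 255.0  # (H, W, 1)
        out = alpha * rgb + (1.0 - alpha) * frame.astype(np.float32)
        return out.astype(np.uint8)

    # Toy usage: a grey frame partially covered by a semi-transparent "blanket".
    frame = np.full((128, 128, 3), 127, dtype=np.uint8)
    occluder = np.zeros((128, 128, 4), dtype=np.uint8)
    occluder[64:, :, :3] = (180, 160, 140)   # blanket colour in the lower half
    occluder[64:, :, 3] = 200                # mostly opaque alpha
    augmented = composite_occlusion(frame, occluder)

In practice, the occluder's appearance and pose would be varied per clip so the fine-tuned model sees a broad range of blanket configurations.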

2023

Deep Learning Methods for Single Camera Based Clinical In-bed Movement Action Recognition

Authors
Karacsony, T; Jeni, LA; De La Torre Frade, F; Cunha, JPS;

Publication

Abstract
Many clinical applications involve in-bed patient activity monitoring, from intensive care and neuro-critical infirmaries to semiology-based epileptic seizure diagnosis support or sleep monitoring at home, and they require accurate recognition of in-bed movement actions from video streams.

The major challenges of clinical application arise from the domain gap between common in-the-lab and clinical scenery (e.g. viewpoint, occlusions, out-of-domain actions), the requirement that monitoring be minimally intrusive to already existing clinical practices (e.g. non-contact monitoring), and the significantly limited amount of labeled clinical action data available.

Focusing on one of the most demanding in-bed clinical scenarios, semiology-based epileptic seizure classification, this review explores the challenges of video-based clinical in-bed monitoring and reviews video-based action recognition trends, monocular 3D MoCap, and semiology-based automated seizure classification approaches. Moreover, it provides a guideline for taking full advantage of transfer learning for in-bed action recognition for quantified, evidence-based clinical diagnosis support.

The review suggests that an approach based on 3D MoCap and skeleton-based action recognition, relying strongly on transfer learning, could be advantageous for these clinical in-bed action recognition problems. However, such approaches still face several challenges, such as spatio-temporal stability, occlusion handling, and robustness, before the full potential of this technology can be realized for routine clinical usage.
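
As a rough illustration of the transfer-learning strategy the review advocates (not the paper's own code), the sketch below fine-tunes only a new classification head on top of a Kinetics-pretrained 3D CNN from torchvision; the backbone choice (r3d_18), class count, and clip shape are assumptions made for the example.

    # Illustrative transfer-learning sketch (assumed setup, not the review's own code):
    # reuse a Kinetics-pretrained 3D CNN and retrain only a new head for a small
    # clinical action label set.
    import torch
    from torchvision.models.video import r3d_18, R3D_18_Weights

    num_clinical_classes = 3  # hypothetical, e.g. three in-bed action classes

    model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
    for param in model.parameters():          # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = torch.nn.Linear(model.fc.in_features, num_clinical_classes)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    # One toy training step on a random clip batch shaped (B, C, T, H, W).
    clips = torch.randn(2, 3, 16, 112, 112)
    labels = torch.randint(0, num_clinical_classes, (2,))
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()

Freezing the backbone and training only the head is one common way to cope with the very limited labeled clinical data the review highlights; with more data, deeper layers could be unfrozen progressively.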

2022

Novel 3D video action recognition deep learning approach for near real time epileptic seizure classification

Authors
Karacsony, T; Loesch-Biffar, AM; Vollmar, C; Remi, J; Noachtar, S; Cunha, JPS;

Publication
Scientific Reports

Abstract
Seizure semiology is a well-established method to classify epileptic seizure types, but it requires a significant amount of resources, as long-term video-EEG monitoring needs to be visually analyzed. Therefore, computer vision based diagnosis support tools are a promising approach. In this article, we utilize infrared (IR) and depth (3D) videos to show the feasibility of a 24/7 novel object and action recognition based deep learning (DL) monitoring system to differentiate between epileptic seizures in frontal lobe epilepsy (FLE), temporal lobe epilepsy (TLE) and non-epileptic events. Based on the largest 3D video-EEG database in the world (115 seizures / +680,000 video frames / 427 GB), we achieved a promising cross-subject validation F1-score of 0.833 +/- 0.061 for the 2-class (FLE vs. TLE) case and 0.763 +/- 0.083 for the 3-class (FLE vs. TLE vs. non-epileptic) case, from 2 s samples, with an automated semi-specialized depth-based (accuracy 95.65%) and Mask R-CNN based (accuracy 96.52%) cropping pipeline to pre-process the videos, enabling a near real-time seizure type detection and classification tool. Our results demonstrate the feasibility of our novel DL approach to support 24/7 epilepsy monitoring, outperforming all previously published methods.
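
The cross-subject validation protocol reported above can be sketched as a leave-one-subject-out loop that averages per-fold F1-scores. The sketch below uses placeholder features and a placeholder classifier rather than the paper's 3D-video DL model, purely to illustrate how mean +/- standard deviation figures of this kind are obtained.

    # Sketch of cross-subject (leave-one-subject-out) F1 evaluation with placeholder
    # data; the paper's actual model is a 3D-video deep learning network.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 64))            # placeholder per-sample features
    y = rng.integers(0, 2, size=300)          # placeholder 2-class labels
    subjects = rng.integers(0, 10, size=300)  # subject ID of each 2 s sample

    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

    print(f"cross-subject F1: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")

Keeping each subject's samples entirely out of the training fold is what makes the reported scores an estimate of generalization to unseen patients rather than to unseen clips of known patients.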

2021

Deepepil: Towards an Epileptologist-Friendly AI Enabled Seizure Classification Cloud System based on Deep Learning Analysis of 3D videos

Authors
Karácsony, T; Loesch Biffar, AM; Vollmar, C; Noachtar, S; Cunha, JPS;

Publication
BHI 2021 - 2021 IEEE EMBS International Conference on Biomedical and Health Informatics, Proceedings

Abstract
Epilepsy is a major neurological disorder affecting approximately 1% of the world population, and seizure semiology is an essential tool for the clinical evaluation of seizures. This includes qualitative visual inspection, by epileptologists, of videos of the seizures recorded in epilepsy monitoring units. In order to support this clinical diagnosis process, promising deep learning-based systems have been proposed. However, these efforts indicate that video datasets of epileptic seizures are still rare and limited in size. In order to enable the full potential of AI systems for epileptic seizure diagnosis support and research, a novel collaborative development framework is proposed for scalable DL-assisted clinical research and diagnosis support of epileptic seizures. The designed cloud-based approach integrates our deployed and tested NeuroKinect data acquisition pipeline into an MLOps framework to scale dataset extension and analysis to multi-clinical utilization. The proposed development framework incorporates an MLOps approach to ensure convenient collaboration between clinicians and data scientists, providing continuous advantages to both user groups. It addresses methods for the efficient utilization of hardware, software, and human resources. In the future, the system will be expanded with several AI-based tools, such as DL-based automated 3D motion capture (MoCap), 3D movement analysis support, quantitative seizure semiology analysis tools, and video-based MOI and seizure classification. © 2021 IEEE