About

Tamás Karácsony is a Ph.D. candidate in the Carnegie Mellon Portugal (CMU Portugal) affiliated Doctoral Program in Electrical and Computer Engineering (PDEEC), at the Department of Electrical and Computer Engineering of the Faculty of Engineering of the University of Porto (FEUP), Portugal. He is also a researcher at INESC TEC (Institute for Systems and Computer Engineering), in the Biomedical Research And INnovation (BRAIN) group of the Centre for Biomedical Engineering Research (C-BER).


His Ph.D. thesis, "Explainable Deep Learning Based Epileptic Seizure Classification with Clinical 3D Motion Capture", is supervised by Prof. João Paulo Cunha and co-supervised by Prof. Fernando De la Torre. He is a visiting research scholar at the Computational Behavior (CUBE) Lab, working with Prof. László A. Jeni, and at the Human Sensing Laboratory (HSL), working with Prof. Fernando De la Torre, both at The Robotics Institute (RI) of Carnegie Mellon University (CMU). His research focuses on advanced human sensing, 3D motion capture, action and pattern recognition, computer vision, and neuroengineering.


He earned an MSc degree with honours in Biomedical Engineering (2018) from the Technical University of Denmark (DTU), and a BSc (2016) and an MSc with highest honours (2020) in Mechatronics Engineering from the Budapest University of Technology and Economics (BUTE).


Details

  • Name

    Tamás Karácsony
  • Role

    Research Assistant
  • Since

    1st May 2019
  • Nationality

    Hungarian
  • Contacts

    +351222094000
    tamas.karacsony@inesctec.pt
Publications

2024

Deep learning methods for single camera based clinical in-bed movement action recognition

Authors
Karácsony, T; Jeni, LA; de la Torre, F; Cunha, JPS;

Publication
IMAGE AND VISION COMPUTING

Abstract
Many clinical applications involve in-bed patient activity monitoring, from intensive care and neuro-critical wards to semiology-based epileptic seizure diagnosis support or sleep monitoring at home, all of which require accurate recognition of in-bed movement actions from video streams. The major challenges of clinical application arise from the domain gap between common in-the-lab and clinical scenery (e.g. viewpoint, occlusions, out-of-domain actions), the requirement that monitoring be minimally intrusive to existing clinical practices (e.g. non-contact monitoring), and the significantly limited amount of labeled clinical action data available. Focusing on one of the most demanding in-bed clinical scenarios, semiology-based epileptic seizure classification, this review explores the challenges of video-based clinical in-bed monitoring and reviews video-based action recognition trends, monocular 3D MoCap, and semiology-based automated seizure classification approaches. Moreover, it provides a guideline for taking full advantage of transfer learning for in-bed action recognition for quantified, evidence-based clinical diagnosis support. The review suggests that an approach based on 3D MoCap and skeleton-based action recognition, relying strongly on transfer learning, could be advantageous for these clinical in-bed action recognition problems. However, such approaches still face several challenges, such as spatio-temporal stability, occlusion handling, and robustness, before the full potential of this technology can be realized for routine clinical usage.

2024

Brain Anterior Nucleus of the Thalamus Signal as a Biomarker of Upper Voluntary Repetitive Movements in Epilepsy Patients

Authors
Lopes, EM; Pimentel, M; Karácsony, T; Rego, R; Cunha, JPS;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
Deep Brain Stimulation of the Anterior Nucleus of the Thalamus (ANT-DBS) is an effective treatment for refractory epilepsy. In order to assess the involvement of the ANT during voluntary hand repetitive movements similar to some seizure-induced ones, we simultaneously collected video-electroencephalogram (vEEG) and ANT Local Field Potential (LFP) signals from two epilepsy patients implanted with the Percept PC neurostimulator, who stayed at an Epilepsy Monitoring Unit (EMU) for a 5-day period. For this purpose, a repetitive voluntary movement execution protocol was designed and an event-related desynchronisation/synchronisation (ERD/ERS) analysis was performed. We found a power increase in the alpha and theta frequency bands during movement execution for both patients. The same pattern was not found when patients were at rest. Furthermore, a similar increase of relative power was found in LFPs from other neighboring basal ganglia. This suggests that the ERS pattern may be associated with upper limb automatisms, indicating that the ANT and other basal ganglia may be involved in the execution of these repetitive movements. These findings may open a new window for the study of seizure-induced movements (semiology) as biomarkers of the beginning of seizures, which can be helpful for the future of adaptive DBS techniques for better control of epileptic seizures in these patients.
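The ERD/ERS measure used in analyses like this one is conventionally computed as the percent change of band power in a task window relative to a pre-movement baseline. A minimal sketch (the function name and windowing are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def erd_ers(band_power, baseline_idx, task_idx):
    """Percent power change of a task window relative to baseline.

    band_power: 1-D array of band-limited power over time
    (e.g. alpha or theta band of an LFP channel).
    Negative values indicate desynchronisation (ERD);
    positive values indicate synchronisation (ERS).
    """
    baseline = band_power[baseline_idx].mean()
    task = band_power[task_idx].mean()
    return 100.0 * (task - baseline) / baseline
```

A power increase during movement, as reported for both patients, would show up here as a positive (ERS) value.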

2024

NeuroKinect4K: A Novel 4K RGB-D-IR Video System with 3D Scene Reconstruction for Enhanced Epileptic Seizure Semiology Monitoring

Authors
Karácsony T.; Fearns N.; Vollmar C.; Birk D.; Rémi J.; Noachtar S.; Silva Cunha J.P.;

Publication
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS

Abstract
Epileptic seizures are clearly characterized by their displayed behavior, the semiology, which is used in diagnosis and classification as a basis for therapy. This article presents a novel 4K 3D video recording and reviewing system for epilepsy monitoring, introducing a novel perspective and allowing continuous recording and review of 3D videos in the epilepsy monitoring unit (EMU). It provides significantly more detail than current clinical systems, which can lead to the recognition of more Movements of Interest (MOIs) and may reduce inter-rater variability. To put the system to an initial test in clinical practice, the article presents three real-world examples of subtle MOIs that could only be appreciated on the 4K video, but not on the VGA video recorded as part of the clinical routine. In conclusion, a 4K RGB recording, 3D cropping, and 3D video playing system was developed, implemented, and tested for real-world clinical scenarios, considering the specific requirements of clinical monitoring in EMUs. The new data acquisition setup can support clinical diagnosis, which may lead to new insights in the field of epilepsy and the development of AI approaches in the future.

2023

BlanketSet - A Clinical Real-World In-Bed Action Recognition and Qualitative Semi-Synchronised Motion Capture Dataset

Authors
Carmona, J; Karacsony, T; Cunha, JPS;

Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG

Abstract
Clinical in-bed video-based human motion analysis is a highly relevant computer vision topic for several biomedical applications. Nevertheless, the main public large datasets (e.g. ImageNet or 3DPW) used for deep learning approaches lack annotated examples for these clinical scenarios. To address this issue, we introduce BlanketSet, an RGB-IR-D action recognition dataset of sequences performed in a hospital bed. This dataset has the potential to help bridge the improvements attained on more general large datasets to these clinical scenarios. Information on how to access the dataset is available at rdm.inesctec.pt/dataset/nis-2022-004.

2023

BlanketGen - A Synthetic Blanket Occlusion Augmentation Pipeline for Motion Capture Datasets

Authors
Carmona, J; Karacsony, T; Cunha, JPS;

Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG

Abstract
Human motion analysis has seen drastic improvements recently; however, it still lags behind in clinical in-bed scenarios due to the lack of representative datasets. To address this issue, we implemented BlanketGen, a pipeline that augments videos with synthetic blanket occlusions. With this pipeline, we generated an augmented version of the pose estimation dataset 3DPW, called BlanketGen-3DPW. We then used this new dataset to fine-tune a deep learning model to improve its performance in these scenarios, with promising results. Code and further information are available at https://gitlab.inesctec.pt/brain-lab/brainlab-public/blanket-gen-releases.
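At its core, this kind of synthetic occlusion augmentation comes down to alpha-compositing a rendered blanket layer over each video frame. The sketch below illustrates that compositing step only; the function name and array conventions are assumptions for illustration, not the released BlanketGen code:

```python
import numpy as np

def composite_occlusion(frame, blanket_rgb, blanket_mask):
    """Alpha-composite a rendered blanket layer over a video frame.

    frame, blanket_rgb: HxWx3 uint8 images;
    blanket_mask: HxW float alpha in [0, 1], where 1 means the
    synthetic blanket fully covers that pixel of the subject.
    """
    alpha = blanket_mask[..., None]  # broadcast over the RGB channels
    out = alpha * blanket_rgb.astype(np.float32) \
        + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(np.uint8)
```

Applied frame by frame with a physically simulated blanket layer, this yields occluded variants of an existing pose dataset while keeping the original pose annotations valid.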