Tim Büchner, M.Sc.
Curriculum Vitae
Since 2021 | Research Associate
Computer Vision Group, Friedrich Schiller University Jena
Research Topic: “4D Facial Muscle Analysis”

2018-2021 | M.Sc. in Computer Science
Friedrich Schiller University Jena
Focus: Computer Vision
Master's Thesis: “A Comparative Analysis of Deep Latent State Space Models” at the Computer Vision Group Jena

2019 | AI Research Assistant
Anatomic Institute 2 at University Hospital Jena

2015-2018 | B.Sc. in Computer Science
Friedrich Schiller University Jena
Bachelor's Thesis: “Repräsentation, Retrieval und Visualisierung genealogischer Daten” (Representation, Retrieval, and Visualization of Genealogical Data) at the AI Chair
Research Interests
- Super-resolution with DenseNets
- Time series analysis based on latent space models
- 3D Computer Vision
Publications
2025
Tim Büchner, Sven Sickert, Gerd F. Volk, Orlando Guntinas-Lichius, Joachim Denzler:
Assessing 3D Volumetric Asymmetry in Facial Palsy Patients via Advanced Multi-view Landmarks and Radial Curves.
Machine Vision and Applications. 36 (1) : 2025.
The research on facial palsy, a unilateral palsy of the facial nerve, is a complex field with many different causes and symptoms. Even modern approaches to evaluate the facial palsy state rely mainly on stills and 2D videos of the face and rarely on dynamic 3D information. Many of these analysis and visualization methods require manual intervention, which is time-consuming and error-prone. Moreover, they often depend on alignment algorithms or Euclidean measurements and consider only static facial expressions. Volumetric changes by muscle movement are essential for facial palsy analysis but require manual extraction. We propose to extract an estimated unilateral volumetric description for dynamic expressions from 3D scans. Accurate landmark positioning is required for processing the unstructured facial scans. In our case, it is attained via a multi-view method compatible with any existing 2D predictors. We analyze prediction stability and robustness against head rotation during video sequences. Further, we investigate volume changes in static and dynamic facial expressions for 34 patients with unilateral facial palsy and visualize volumetric disparities on the face surface. In a case study, we observe a decrease in the volumetric difference between the face sides during happy expressions at the beginning (13.8 ± 10.0 mm³) and end (12.8 ± 10.3 mm³) of a ten-day biofeedback therapy. The neutral face kept a consistent volume range of 11.8-12.1 mm³. The reduced volumetric difference after therapy indicates less facial asymmetry during movement, which can be used to monitor and guide treatment decisions. Our approach minimizes human intervention, simplifying the clinical routine and interaction with 3D scans to provide a more comprehensive analysis of facial palsy.
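The unilateral volume comparison described above can be sketched numerically. In this hedged toy example (the function name and the mirrored-grid sampling scheme are illustrative assumptions, not the paper's actual pipeline), the two face sides are sampled at mirrored positions and the per-cell depth differences are integrated:

```python
import numpy as np

def volumetric_asymmetry(depth_left, depth_right, cell_area):
    """Hypothetical sketch: approximate the volume difference (mm^3)
    between the two face sides from depth samples (mm) taken at
    mirrored grid positions, each representing cell_area mm^2."""
    # Per-cell volume difference: |depth difference| times the cell footprint.
    return np.sum(np.abs(depth_left - depth_right)) * cell_area

# Toy example: one side bulges 0.5 mm over 4 of 6 cells of 1 mm^2 each.
left = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
right = np.array([1.5, 1.5, 2.5, 2.5, 3.0, 3.0])
print(volumetric_asymmetry(left, right, cell_area=1.0))  # → 2.0
```

A perfectly symmetric face would yield a value of zero; larger values indicate stronger volumetric asymmetry, matching the mm³ differences reported in the abstract.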
2024
Lukas Schuhmann, Tim Büchner, Martin Heinrich, Gerd Fabian Volk, Joachim Denzler, Orlando Guntinas-Lichius:
Automated Analysis of Spontaneous Eye Blinking in Patients with Acute Facial Palsy or Facial Synkinesis.
Scientific Reports. 14 (1) : pp. 17726. 2024.
Although patients with facial palsy often complain of disturbed eye blinking, which may lead to visual impairment, a blinking analysis is not part of routine grading of facial palsy. Twenty minutes of spontaneous eye blinking at rest of 30 patients with facial palsy (6 with acute palsy; 24 patients with facial synkinesis; median age: 58 years, 67% female) and 30 matched healthy probands (median age: 57 years; 67% female) were video-recorded with a smartphone. A custom computer program automatically extracted eye measures and determined the eye closure rate (eye aspect ratio [EAR]), blink frequency, and blink duration. The Facial Clinimetric Evaluation (FaCE) and Facial Disability Index (FDI) were assessed as patient-reported outcome measures. The minimal EAR, i.e., the minimal visible eye surface during blinking, was significantly higher on the paretic side in patients with acute facial palsy than in patients with synkinesis or in healthy controls. The blinking frequency on the affected side was significantly lower in both patient groups compared to healthy controls. Conversely, blink duration was longer in both patient groups. There was no clear correlation between the blinking values and FaCE and FDI. Blinking parameters are easy to estimate automatically and add a functionally important parameter to facial grading.
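The eye aspect ratio (EAR) mentioned above is commonly computed from six eye landmarks. The sketch below uses the standard formulation (two vertical lid distances over twice the horizontal corner distance), which may differ in detail from the custom program used in the study:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six 2D eye landmarks, in the usual convention:
    p1/p4 = eye corners, p2/p3 = upper lid, p5/p6 = lower lid."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# An open eye has a clearly larger EAR than a nearly closed one.
open_eye = np.array([[0, 0], [2, 1], [4, 1], [6, 0], [4, -1], [2, -1]], float)
closed_eye = np.array([[0, 0], [2, 0.1], [4, 0.1], [6, 0], [4, -0.1], [2, -0.1]], float)
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # → True
```

Thresholding the EAR over time yields blink events, from which blink frequency and duration follow directly.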
Sai Karthikeya Vemuri, Tim Büchner, Joachim Denzler:
Estimating Soil Hydraulic Parameters for Unsaturated Flow using Physics-Informed Neural Networks.
International Conference on Computational Science (ICCS). Pages 338-351. 2024.
Water movement in soil is essential for weather monitoring, prediction of natural disasters, and agricultural water management. Richardson-Richards' equation (RRE) is the characteristic partial differential equation for studying soil water movement. RRE is a non-linear PDE involving water potential, hydraulic conductivity, and volumetric water content. This equation has underlying non-linear parametric relationships called water retention curves (WRCs) and hydraulic conductivity functions (HCFs). This two-level non-linearity makes the problem of unsaturated water flow of soils challenging to solve. Physics-Informed Neural Networks (PINNs) offer a powerful paradigm to combine physics in data-driven techniques. From noisy or sparse observations of one variable (water potential), we use PINNs to learn the complete system, estimate the parameters of the underlying model, and further facilitate the prediction of infiltration and discharge. We employ training on RRE, WRC, HCF, and measured values to resolve two-level non-linearity directly instead of explicitly deriving water potential or volumetric water content-based formulations. The parameters to be estimated are made trainable with initialized values. We take water potential data from simulations and use this data to solve the inverse problem with PINN and compare estimated parameters, volumetric water content, and hydraulic conductivity with actual values. We chose different types of parametric relationships and wetting conditions to show the approach's effectiveness.
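As an illustration of the parametric relationships involved, the following sketch implements the van Genuchten model, one common choice of water retention curve; whether this exact form was used in the paper is an assumption. Its shape parameters are exactly the kind of quantities a PINN can make trainable:

```python
import numpy as np

def van_genuchten_wrc(h, theta_r, theta_s, alpha, n):
    """Van Genuchten water retention curve: volumetric water content as a
    function of (negative) water potential h. theta_r/theta_s are residual
    and saturated water contents; alpha and n are shape parameters."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Saturated soil (h -> 0) approaches theta_s; very dry soil approaches theta_r.
print(van_genuchten_wrc(0.0, 0.05, 0.4, 0.1, 2.0))   # ≈ 0.4
print(van_genuchten_wrc(-1e6, 0.05, 0.4, 0.1, 2.0))  # ≈ 0.05
```

In the inverse-problem setting described above, (theta_r, theta_s, alpha, n) would be initialized and updated alongside the network weights so that the PINN residuals and the measured water potentials are jointly fitted.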
Sai Karthikeya Vemuri, Tim Büchner, Julia Niebling, Joachim Denzler:
Functional Tensor Decompositions for Physics-Informed Neural Networks.
International Conference on Pattern Recognition (ICPR). Pages 32-46. 2024. Best Paper Award
Physics-Informed Neural Networks (PINNs) have shown continuous promise in approximating partial differential equations (PDEs), although they remain constrained by the curse of dimensionality. In this paper, we propose a generalized PINN version of the classical variable separable method. To do this, we first show that, using the universal approximation theorem, a multivariate function can be approximated by the outer product of neural networks whose inputs are separated variables. We leverage tensor decomposition forms to separate the variables in a PINN setting. By employing Canonical Polyadic (CP), Tensor-Train (TT), and Tucker decomposition forms within the PINN framework, we create robust architectures for learning multivariate functions from separate neural networks connected by outer products. Our methodology significantly enhances the performance of PINNs, as evidenced by improved results on complex high-dimensional PDEs, including the 3D Helmholtz and 5D Poisson equations, among others. This research underscores the potential of tensor decomposition-based variable-separated PINNs to surpass the state of the art, offering a compelling solution to the dimensionality challenge in PDE approximation.
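The variable-separation idea can be sketched in a few lines: a multivariate function is represented as a rank-R sum of products of univariate factors (the CP form). Here plain Python functions stand in for the per-variable neural networks, so this is a sketch of the structure, not of the trained architecture:

```python
import numpy as np

def cp_separable(fx, fy, x, y):
    """Rank-R variable-separated approximation
    u(x, y) = sum_r fx[r](x) * fy[r](y).
    In the paper each factor is a small neural network; here ordinary
    functions play that role."""
    return sum(f(x) * g(y) for f, g in zip(fx, fy))

# sin(x)cos(y) + x*y is rank-2 separable, so two factor pairs suffice.
fx = [np.sin, lambda x: x]
fy = [np.cos, lambda y: y]
x, y = 0.5, 1.0
exact = np.sin(x) * np.cos(y) + x * y
print(np.isclose(cp_separable(fx, fy, x, y), exact))  # → True
```

Because each factor sees only one coordinate, the number of network inputs grows linearly rather than exponentially with the PDE dimension, which is the source of the dimensionality gains claimed above.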
Tim Büchner, Niklas Penzel, Orlando Guntinas-Lichius, Joachim Denzler:
Facing Asymmetry - Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions.
Asian Conference on Computer Vision (ACCV). 2024. (accepted at ACCV)
Understanding expressions is vital for deciphering human behavior, and nowadays, end-to-end trained black box models achieve high performance. Due to the black-box nature of these models, it is unclear how they behave when applied out-of-distribution. Specifically, these models show decreased performance for unilateral facial palsy patients. We hypothesize that one crucial factor guiding the internal decision rules is facial symmetry. In this work, we use insights from causal reasoning to investigate the hypothesis. After deriving a structural causal model, we develop a synthetic interventional framework. This approach allows us to analyze how facial symmetry impacts a network's output behavior while keeping other factors fixed. All 17 investigated expression classifiers significantly lower their output activations for reduced symmetry. This result is congruent with observed behavior on real-world data from healthy subjects and facial palsy patients. As such, our investigation serves as a case study for identifying causal factors that influence the behavior of black-box models.
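A toy stand-in for such a symmetry intervention (the paper's actual mechanism operates on synthetic renderings and may differ substantially) is to blend an image with its horizontal mirror, so that a single parameter controls the degree of facial symmetry while all other factors stay fixed:

```python
import numpy as np

def symmetry_intervention(face, alpha):
    """Hypothetical sketch of a symmetry intervention on an image array:
    alpha = 0 leaves the input unchanged, alpha = 1 yields a perfectly
    symmetric face (the average of the image and its mirror)."""
    mirrored = face[:, ::-1]
    return (1.0 - alpha) * face + alpha * 0.5 * (face + mirrored)

face = np.array([[0.0, 1.0], [2.0, 4.0]])
sym = symmetry_intervention(face, 1.0)
# The fully intervened result equals its own mirror image.
print(np.allclose(sym, sym[:, ::-1]))  # → True
```

Sweeping alpha and recording a classifier's output activations at each step is the kind of interventional analysis the abstract describes, with symmetry as the manipulated causal factor.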
Tim Büchner, Niklas Penzel, Orlando Guntinas-Lichius, Joachim Denzler:
The Power of Properties: Uncovering the Influential Factors in Emotion Classification.
International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI). 2024.
Facial expression-based human emotion recognition is a critical research area in psychology and medicine. State-of-the-art classification performance is only reached by end-to-end trained neural networks. Nevertheless, such black-box models lack transparency in their decision-making processes, prompting efforts to ascertain the rules that underlie classifiers’ decisions. Analyzing single inputs alone fails to expose systematic learned biases. These biases can be characterized as facial properties summarizing abstract information like age or medical conditions. Therefore, understanding a model’s prediction behavior requires an analysis rooted in causality along such selected properties. We demonstrate that up to 91.25% of classifier output behavior changes are statistically significant concerning basic properties. Among those are age, gender, and facial symmetry. Furthermore, the medical usage of surface electromyography significantly influences emotion prediction. We introduce a workflow to evaluate explicit properties and their impact. These insights might help medical professionals select and apply classifiers regarding their specialized data and properties.
Tim Büchner, Sven Sickert, Gerd F. Volk, Christoph Anders, Joachim Denzler, Orlando Guntinas-Lichius:
Reducing the Gap Between Mimics and Muscles by Enabling Facial Feature Analysis during sEMG Recordings [Abstract].
Congress of the Confederation of European ORL-HNS. 2024.
Introduction: Surface electromyography (sEMG) is an effective technique for studying facial muscles. However, although it would be valuable, the simultaneous acquisition of 2D facial movement videos creates incompatibilities with analysis methodologies because the sEMG electrodes and wires obstruct part of the face. The present study overcame these limitations using machine learning mechanisms to make the sEMG electrodes disappear artificially (artificial videos with removed electrodes). Material & Methods: We recorded 36 probands (18-67 years, 17 male, 19 female) and measured their muscular activity using two sEMG schematics [1], [2], totaling 60 electrodes attached to the face [3]. Each proband mimicked the six basic emotions four times in random order, guided by an instructional video. Minimal Change CycleGANs were used to generate reconstructed videos without sEMG electrodes [4], [5]. Finally, the emotions expressed by the probands were classified with ResMaskNet [6]. Results: We quantitatively compared the sEMG data and reconstructed videos with reference recordings. The artificial videos achieved a Fréchet Inception Distance [10] score of 0.50 ± 0.74, while sEMG videos scored 10.46 ± 2.10, indicating high visual quality. With electrodes attached, we obtained an emotion classification accuracy of 34 ± 10% (equivalent to two-category random guessing). Our approach obtained up to 83% accuracy after electrode removal. Conclusions: Our techniques and studies enable simultaneous analysis of muscle activity and facial movements. We reconstruct facial regions obstructed by electrodes and wires, preserving the underlying expression. Our data-driven and label-free approach enables established methods without further modifications. Supported by DFG DE-735/15-1 and DFG GU-463/12-1
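The Fréchet Inception Distance used above compares Gaussian statistics of deep image features. The following simplified sketch computes the Fréchet distance for diagonal covariances only; the real FID uses Inception-network embeddings with full covariance matrices and a matrix square root:

```python
import numpy as np

def frechet_distance_diag(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians with diagonal covariances
    (stds sigma1, sigma2). For diagonal covariances the trace term
    Tr(S1 + S2 - 2(S1 S2)^(1/2)) reduces to sum((sigma1 - sigma2)^2)."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

mu = np.array([0.0, 1.0])
sigma = np.array([1.0, 2.0])
print(frechet_distance_diag(mu, sigma, mu, sigma))  # → 0.0 (identical stats)
mu2 = np.array([1.0, 1.0])
print(frechet_distance_diag(mu, sigma, mu2, sigma))  # → 1.0
```

A score near zero means the reconstructed videos are statistically close to real recordings in feature space, which is how the 0.50 vs. 10.46 comparison above should be read.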
Tim Büchner, Sven Sickert, Gerd F. Volk, Joachim Denzler, Orlando Guntinas-Lichius:
An Automatic, Objective Method to Measure and Visualize Volumetric Changes in Patients with Facial Palsy during 3D Video Recordings [Abstract].
95th Annual Meeting German Society of Oto-Rhino-Laryngology, Head and Neck Surgery e. V., Bonn. 2024.
Introduction: Using grading systems, the severity of facial palsy is typically classified through static 2D images. These approaches fail to capture crucial facial attributes, such as the depth of the nasolabial fold. We present a novel technique that uses 3D video recordings to overcome this limitation. Our method automatically characterizes the facial structure, calculates volumetric disparities between the affected and contralateral side, and includes an intuitive visualization. Material: 35 patients (mean age 51 years, min. 25, max. 72; 7 ♂, 28 ♀) with unilateral chronic synkinetic facial palsy were enrolled. We utilized the 3dMD face system (3dMD LCC, Georgia, USA) to record their facial movements while they mimicked happy facial expressions four times. Each recording lasted 6.5 seconds, with a total of 140 videos. Results: We found a difference in volume between the neutral and happy expressions: 11.7 ± 9.1 mm³ and 13.73 ± 10.0 mm³, respectively. This suggests a higher level of asymmetry during movements. Our process is fully automatic without human intervention, highlights the impacted areas, and emphasizes the differences between the affected and contralateral side. Discussion: Our data-driven method allows healthcare professionals to track and visualize patients' volumetric changes automatically, facilitating personalized treatments. It mitigates the risk of human biases in therapeutic evaluations and effectively transitions from static 2D images to dynamic 4D assessments of the facial palsy state. Supported by DFG DE-735/15-1 and DFG GU-463/12-1
Tim Büchner, Sven Sickert, Gerd F. Volk, Martin Heinrich, Joachim Denzler, Orlando Guntinas-Lichius:
Measuring and Visualizing Volumetric Changes Before and After 10-Day Biofeedback Therapy in Patients with Synkinetic Facial Palsy Using 3D Video Recordings [Abstract].
Congress of the Confederation of European ORL-HNS. 2024.
Introduction: The severity of facial palsy is typically assessed using grading systems based on 2D image analysis [1]. Thus, the full range of facial features, especially depth information, is neglected. The present study employed 3D video-based methods to measure volume disparities during dynamic facial movements [2], [3], overcoming prior limitations. In addition, impacted areas were highlighted on the scan for an intuitive visualization. Material & Methods: 35 patients (25-72 years; 28 female) with unilateral chronic synkinetic facial palsy were recorded with the 3dMD face system (3dMD LCC, Georgia, USA) at the beginning and end of a 10-day biofeedback therapy focused on more symmetric facial expressions. The patients mimicked a happy facial expression four times, each recording lasting 6.5 seconds, totaling 280 videos. We used the Curvature of Radial Curves (CORC) [2] as a dense face descriptor and followed our previous method [3] to estimate the volume changes. Results: We found a reduced volume difference between the contralateral and paretic side during the happy expression at therapy beginning (13.73 ± 10.0 mm³) and end (12.79 ± 10.3 mm³). The neutral face remained unchanged in the range of 11.77-12.07 mm³. This indicates lower asymmetry during movements after therapy and could be used as an objective measurement of training success during therapy. Conclusions: Our data-driven method enables tracking and visualizing volume disparities between the paretic and contralateral sides. We reduce human bias during evaluation, personalize treatment, and shift 2D image assessments of facial palsy to dynamic 4D evaluations. Supported by DFG DE-735/15-1 and DFG GU-463/12-1
Yuxuan Xie, Tim Büchner, Lukas Schuhmann, Orlando Guntinas-Lichius, Joachim Denzler:
Unsupervised Learning of Eye State Prototypes for Semantically Rich Blinking Detection.
Digital Health & Informatics Innovations for Sustainable Health Care Systems. Pages 1607-1611. 2024.
2023
Tim Büchner, Orlando Guntinas-Lichius, Joachim Denzler:
Improved Obstructed Facial Feature Reconstruction for Emotion Recognition with Minimal Change CycleGANs.
Advanced Concepts for Intelligent Vision Systems (ACIVS). Pages 262-274. 2023. Best Paper Award
Comprehending facial expressions is essential for human interaction and closely linked to facial muscle understanding. Typically, muscle activation measurement involves electromyography (EMG) surface electrodes on the face. Consequently, facial regions are obscured by electrodes, posing challenges for computer vision algorithms to assess facial expressions. Conventional methods are unable to assess facial expressions with occluded features due to lack of training on such data. We demonstrate that a CycleGAN-based approach can restore occluded facial features without fine-tuning models and algorithms. By introducing the minimal change regularization term to the optimization problem for CycleGANs, we enhanced existing methods, reducing hallucinated facial features. We reached a correct emotion classification rate up to 90% for individual subjects. Furthermore, we overcome individual model limitations by training a single model for multiple individuals. This allows for the integration of EMG-based expression recognition with existing computer vision algorithms, enriching facial understanding and potentially improving the connection between muscle activity and expressions.
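The minimal change idea can be sketched as an extra penalty on the generator output added to the CycleGAN objective; the exact norm and weighting used in the paper are assumptions here:

```python
import numpy as np

def minimal_change_penalty(x, g_x, lam=10.0):
    """Sketch of a minimal-change regularization term: penalize the
    generator G for altering pixels at all, so that ideally only the
    obstructed regions (electrodes) are rewritten. Uses a weighted
    mean absolute difference between input x and output G(x)."""
    return lam * np.mean(np.abs(g_x - x))

x = np.zeros((4, 4))
identical = minimal_change_penalty(x, x)        # generator changed nothing
edited = minimal_change_penalty(x, x + 0.25)    # generator altered every pixel
print(identical, edited)  # → 0.0 2.5
```

During training this term competes with the adversarial loss: the generator must remove the electrodes convincingly while leaving the unobstructed face as untouched as possible, which is what suppresses hallucinated features.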
Tim Büchner, Sven Sickert, Gerd F. Volk, Christoph Anders, Orlando Guntinas-Lichius, Joachim Denzler:
Let’s Get the FACS Straight - Reconstructing Obstructed Facial Features.
International Conference on Computer Vision Theory and Applications (VISAPP). Pages 727-736. 2023.
The human face is one of the most crucial parts of interhuman communication. Even when parts of the face are hidden or obstructed, the underlying facial movements can be understood. Machine learning approaches often fail in that regard due to the complexity of the facial structures. To alleviate this problem, a common approach is to fine-tune a model for such a specific application. However, this is computationally intensive and might have to be repeated for each desired analysis task. In this paper, we propose to reconstruct obstructed facial parts to avoid the task of repeated fine-tuning. As a result, existing facial analysis methods can be used without further changes with respect to the data. In our approach, the restoration of facial features is interpreted as a style transfer task between different recording setups. By using the CycleGAN architecture, the requirement of matched pairs, which is often hard to fulfill, can be eliminated. To prove the viability of our approach, we compare our reconstructions with real unobstructed recordings. We created a novel data set in which 36 test subjects were recorded both with and without 62 surface electromyography sensors attached to their faces. In our evaluation, we feature typical facial analysis tasks, like the computation of Facial Action Units and the detection of emotions. To further assess the quality of the restoration, we also compare perceptual distances. We can show that scores similar to those of videos without obstructing sensors can be achieved.
Tim Büchner, Sven Sickert, Gerd F. Volk, Orlando Guntinas-Lichius, Joachim Denzler:
From Faces To Volumes - Measuring Volumetric Asymmetry in 3D Facial Palsy Scans.
International Symposium on Visual Computing (ISVC). Pages 121-132. 2023. Best Paper Award
The research on facial palsy, a unilateral palsy of the facial nerve, is a complex field of study with many different causes and symptoms. Even modern approaches to evaluate the facial palsy state rely mainly on stills and 2D videos of the face and rarely on 3D information. Many of these analysis and visualization methods require manual intervention, which is time-consuming and error-prone. Moreover, existing approaches depend on alignment algorithms or Euclidean measurements and consider only static facial expressions. Volumetric changes by muscle movement are essential for facial palsy analysis but require manual extraction. Our proposed method extracts a heuristic unilateral volumetric description for dynamic expressions from 3D scans. Accurate positioning of 3D landmarks, problematic for facial palsy, is automated by adapting existing methods. Additionally, we visualize the primary areas of volumetric disparity by projecting them onto the face. Our approach substantially minimizes human intervention, simplifying the clinical routine and interaction with 3D scans. The proposed pipeline can potentially analyze and monitor patient treatment progress more effectively.
Tim Büchner, Sven Sickert, Roland Graßme, Christoph Anders, Orlando Guntinas-Lichius, Joachim Denzler:
Using 2D and 3D Face Representations to Generate Comprehensive Facial Electromyography Intensity Maps.
International Symposium on Visual Computing (ISVC). Pages 136-147. 2023.
Electromyography (EMG) is a method to measure muscle activity. Physicians also use EMG to study the function of facial muscles through intensity maps (IMs) to support diagnostics and research. However, many existing visualizations neglect proper anatomical structures and disregard the physical properties of EMG signals. Especially the variance of facial structures between people complicates the generalization of IMs, which is crucial for their correct interpretation. In our work, we overcome these issues by introducing a pipeline to generate anatomically correct IMs for facial muscles. An IM generation algorithm is proposed based on a template model incorporating custom surface EMG schemes and combining them with a projection method to highlight the IMs on the patient's face in 2D and 3D. We evaluate the generated and projected IMs based on their correct projection quality for six base emotions on several subjects. These visualizations deepen the understanding of muscle activity areas and indicate that a holistic view of the face could be necessary to understand facial muscle activity. Medical experts can use our approach to study the function of facial muscles and to support diagnostics and therapy.
2022
Gabriel Meincke, Johannes Krauß, Maren Geitner, Dirk Arnold, Anna-Maria Kuttenreich, Valeria Mastryukova, Jan Beckmann, Wengelawit Misikire, Tim Büchner, Joachim Denzler, Orlando Guntinas-Lichius, Gerd F. Volk:
Surface Electrostimulation Prevents Denervated Muscle Atrophy in Facial Paralysis: Ultrasound Quantification [Abstract].
Abstracts of the 2022 Joint Annual Conference of the Austrian (ÖGBMT), German (VDE DGBMT) and Swiss (SSBE) Societies for Biomedical Engineering, including the 14th Vienna International Workshop on Functional Electrical Stimulation. 67 (s1) : pp. 542. 2022.
Only sparse evidence of the potential of surface electrical stimulation (ES) for preventing muscle atrophy in patients with acute or chronic facial palsy has been published so far. Studies addressing objective imaging methods for paralysis quantification are especially needed. Facial muscles, the principal target of ES, can be quantified directly via ultrasound, a swiftly feasible imaging method. Our study represents one of the few systematic evaluations of this approach in patients with complete unilateral facial paralysis. Methods: A well-established ultrasound protocol for the quantification of area and grey levels was used to evaluate the therapeutic effects of ES on patients with facial paralysis. Only patients with complete facial paralysis confirmed by needle electromyography were included. Individual ES parameters were set during the first visit and confirmed/adapted every month thereafter. At each visit, patients additionally underwent facial needle electromyography to rule out reinnervation, as well as ultrasound imaging of 7 facial and 2 chewing muscles. Results: In total, 15 patients were recruited (median 53 years, min. 25, max. 78; 8 female, 7 male). They underwent ES for a maximum of 1 year without serious adverse events. All patients were able to follow the ES protocol. First results from the ultrasound assessment already indicate that electrically stimulated paralytic muscles do not experience any further cross-sectional area decrease in comparison to the contralateral side. Non-stimulated muscles do not show significant changes. Similar effects on grey levels remain to be assessed to draw further conclusions. Conclusion: ES appears to decelerate the atrophy of facial muscles in patients with complete facial paralysis; the muscular cross-sectional area does not seem to decrease during the period of electrostimulation in the sonographic assessment. This demonstrates the benefit of ES regarding facial muscle atrophy in patients with complete facial paralysis.
Johannes Krauß, Gabriel Meincke, Maren Geitner, Dirk Arnold, Anna-Maria Kuttenreich, Valeria Mastryukova, Jan Beckmann, Wengelawit Misikire, Tim Büchner, Joachim Denzler, Orlando Guntinas-Lichius, Gerd F. Volk:
Optical Quantification of Surface Electrical Stimulation to Prevent Denervation Muscle Atrophy in 15 Patients with Facial Paralysis [Abstract].
Abstracts of the 2022 Joint Annual Conference of the Austrian (ÖGBMT), German (VDE DGBMT) and Swiss (SSBE) Societies for Biomedical Engineering, including the 14th Vienna International Workshop on Functional Electrical Stimulation. 67 (s1) : pp. 541. 2022.
Few studies showing the therapeutic potential of electrical stimulation (ES) of the facial surface in patients with facial palsy have been published so far. Not only muscular atrophy of the facial muscles but also facial disfigurement represents a main issue for patient well-being. Therefore, objective methods are required to detect ES effects on facial symmetry in patients with complete unilateral facial paralysis. Methods: Only patients with one-sided peripheral complete facial paralysis confirmed by needle EMG were included; they underwent ES twice a day for 20 min until the event of reinnervation or for a maximum of 1 year. ES parameters were set during the first visit and confirmed/adapted every month thereafter. At each visit, patients underwent needle electromyography, 2D photographic documentation, and 3D videos. Whereas 2D images allow Euclidean measurements of facial symmetry, 3D images permit detection of metrical divergence between the two sides of the face. Using the 2D and 3D photographic documentation, we aim to prove that ES is able to prevent muscular atrophy in patients with facial paralysis. Results: In total, 15 patients were recruited (median 53 years, min. 25, max. 78; 8 female, 7 male). They underwent ES for a maximum of one year without serious adverse events. All patients were able to follow the ES protocol. In the short term, we could detect positive effects of ES on the extent of asymmetry of the mouth corners. Preliminary results show positive effects leading to improved symmetry of denervated faces. Conclusion: A positive short-term effect of ES on facial symmetry in patients with total paralysis could be shown. The improvement of optical appearance during ES has a positive effect on patients' satisfaction and represents a promising, easily accessible marker for facial muscles in facial paralysis patients. Improving facial symmetry by ES might also be linked to preventing facial muscle atrophy. Acknowledgements: Sponsored by DFG GU-463/12-1 and IZKF
Sven Festag, Gideon Stein, Tim Büchner, Maha Shadaydeh, Joachim Denzler, Cord Spreckelsen:
Outcome Prediction and Murmur Detection in Sets of Phonocardiograms by a Deep Learning-Based Ensemble Approach.
Computing in Cardiology (CinC). Pages 1-4. 2022.
We, the team UKJ_FSU, propose a deep learning system for the prediction of congenital heart diseases. Our method is able to predict the clinical outcomes (normal, abnormal) of patients as well as to identify heart murmur (present, absent, unclear) based on phonocardiograms recorded at different auscultation locations. The system we propose is an ensemble of four temporal convolutional networks with identical topologies, each specialized in identifying murmurs and predicting patient outcome from a phonocardiogram taken at one specific auscultation location. Their intermediate outputs are augmented by the manually ascertained patient features such as age group, sex, height, and weight. The outputs of the four networks are combined to form a single final decision as demanded by the rules of the George B. Moody PhysioNet Challenge 2022. On the first task of this challenge, the murmur detection, our model reached a weighted accuracy of 0.567 with respect to the validation set. On the outcome prediction task (second task) the ensemble led to a mean outcome cost of 10679 on the same set. By focusing on the clinical outcome prediction and tuning some of the hyper-parameters only for this task, our model reached a cost score of 12373 on the official test set (rank 13 of 39). The same model scored a weighted accuracy of 0.458 regarding the murmur detection on the test set (rank 37 of 40).
Tim Büchner, Sven Sickert, Gerd F. Volk, Orlando Guntinas-Lichius, Joachim Denzler:
Automatic Objective Severity Grading of Peripheral Facial Palsy Using 3D Radial Curves Extracted from Point Clouds.
Challenges of Trustable AI and Added-Value on Health. Pages 179-183. 2022.
Peripheral facial palsy is a condition in which a one-sided paralysis of the facial muscles occurs due to nerve damage. Medical experts use visual severity grading methods to estimate this damage. Our algorithm-based method provides an objective grading using 3D point clouds. We extract facial radial curves from static 3D recordings to measure volumetric differences between both sides of the face. We analyze five patients with chronic complete peripheral facial palsy to evaluate our method by comparing changes over several recording sessions. We show that our proposed method allows an objective assessment of facial palsy.
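Extracting one radial curve from a point cloud can be sketched as selecting the points within an angular tolerance of a ray from a central landmark; the parameterization below (frontal-plane angles around a hypothetical nose-tip center) is illustrative, not the paper's exact method:

```python
import numpy as np

def radial_curve_indices(points, center, angle, tol=np.deg2rad(5)):
    """Sketch: return the indices of the 3D points lying on one radial
    curve, i.e. within an angular tolerance of a ray emanating from a
    central landmark (e.g. the nose tip), measured in the frontal
    (x, y) plane."""
    d = points[:, :2] - center[:2]
    ang = np.arctan2(d[:, 1], d[:, 0])
    diff = np.abs((ang - angle + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-pi, pi]
    return np.where(diff <= tol)[0]

pts = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.2], [-1.0, 0.0, 0.1]])
center = np.zeros(3)
print(radial_curve_indices(pts, center, 0.0))  # → [0]
```

Repeating this over a fan of angles yields a family of radial curves whose depth profiles can be compared between the two face sides to quantify asymmetry.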