Ihab Asaad, M.Sc.

Address: Computer Vision Group
Department of Mathematics and Computer Science
Friedrich Schiller University of Jena
Ernst-Abbe-Platz 2
07743 Jena
Germany
Phone: +49 (0) 3641 9 46335
E-mail: ihab (dot) asaad (at) uni-jena (dot) de
Room: 1224
Links: GitHub
Curriculum Vitae
since 2023 | PhD Student
Project: “Sensorized Surgery: Optically guided precision surgery by real-time AI-interpreted multimodal imaging with continuous sensory feedback.”
Computer Vision Group, Friedrich Schiller University Jena
2022 – 2023 | M.Sc. Signal and Image Processing Methods and Applications
Master Thesis: “Self-supervised Learning of Speech Representations, Application to Speech Inpainting”
Grenoble Institute of Technology, Phelma, France
2019 – 2022 | M.Sc. Control in Technical Systems
Master Thesis: “Development of a Control System for an Unmanned Aerial Vehicle of the Bicopter Type”
Bauman Moscow State Technical University, Russia
2013 – 2018 | B.Sc. Electronic Systems Engineering
Higher Institute for Applied Sciences and Technology, Syria
Research Interests
- Medical Imaging and AI
- Human-Computer Interaction
Publications
2025
Ihab Asaad, Maha Shadaydeh, Joachim Denzler:
Gradient Extrapolation for Debiased Representation Learning.
2025.
[bibtex] [pdf] [doi] [abstract]
Machine learning classification models trained with empirical risk minimization (ERM) often inadvertently rely on spurious correlations. When these unintended associations between non-target attributes and target labels are absent from the test data, they lead to poor generalization. This paper addresses the problem from a model optimization perspective and proposes a novel method, Gradient Extrapolation for Debiased Representation Learning (GERNE), designed to learn debiased representations in both known and unknown attribute training cases. GERNE uses two distinct batches with different amounts of spurious correlations and defines the target gradient as the linear extrapolation of the two gradients computed from each batch's loss. It is demonstrated that the extrapolated gradient, if directed toward the gradient of the batch with the smaller amount of spurious correlations, can guide the training process toward learning a debiased model. GERNE can serve as a general framework for debiasing, with methods such as ERM, reweighting, and resampling shown to be special cases. Theoretical upper and lower bounds on the extrapolation factor are derived to ensure convergence. By adjusting this factor, GERNE can be adapted to maximize either the Group-Balanced Accuracy (GBA) or the Worst-Group Accuracy. The proposed approach is validated on five vision benchmarks and one NLP benchmark, demonstrating competitive and often superior performance compared to state-of-the-art baseline methods.
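The core update described in the abstract can be sketched in a few lines. This is an illustrative toy sketch only, not the paper's implementation: the function name, the sign convention, and the toy gradients are assumptions here; the exact formulation and the bounds on the extrapolation factor are given in the publication.

```python
import numpy as np

def extrapolated_gradient(g_biased, g_less_biased, c):
    """Linear extrapolation of two batch gradients (illustrative sketch).

    g_biased:      gradient from the batch with more spurious correlation
    g_less_biased: gradient from the batch with less spurious correlation
    c:             extrapolation factor; c = 0 recovers the less-biased
                   batch gradient, c > 0 moves further away from the
                   biased direction (the paper derives bounds on c)
    """
    return g_less_biased + c * (g_less_biased - g_biased)

# Toy 2-D gradients, purely for illustration.
g1 = np.array([1.0, 0.0])   # more-biased batch
g2 = np.array([0.5, 0.5])   # less-biased batch
print(extrapolated_gradient(g1, g2, 1.0))  # → [0. 1.]
```

With c = 0 the update reduces to training on the less-biased batch alone; the abstract notes that ERM, reweighting, and resampling arise as special cases of the framework for particular choices.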