About me
I am currently a postdoctoral researcher at the Hertie Institute for AI in Brain Health, where my work focuses on trustworthy AI for medical applications. My research centers on building methods that can safely and reliably improve patient care and support user well-being.
I completed my PhD on Interpretable Machine Learning for Safe Medical Diagnostics in the Department of Data Science at the Hertie Institute for Artificial Intelligence in Brain Health, University of Tübingen. During my PhD studies, I was also a scholar of the International Max Planck Research School for Intelligent Systems (IMPRS-IS).
Bio
I hold a bachelor’s degree (BSc) in Computer Mathematics and a master’s degree (MSc) in Computer Science from the University of Dschang in Cameroon. I then completed a second master’s degree (MSc) in Applied Mathematics at the African Institute for Mathematical Sciences (AIMS) in South Africa.
Following this, I joined the International Max Planck Research School for Intelligent Systems (IMPRS-IS), where I pursued my PhD in Machine Learning for Medical Image Analysis under the supervision of Prof. Dr. Philipp Berens.
Research interests
My research focuses on explainable deep learning for clinical diagnosis, with emphasis on inherently interpretable models for medical image analysis. My current work includes applications in ophthalmology and radiology.
Beyond interpretability, I am also interested in Vision Language Models (VLMs), clinical ethics, fairness in AI, resource-efficient deep learning models, the deployment of AI systems in clinical practice, and ensuring the robustness of models in real-world settings.
Teaching Assistant (TA)
- Machine Learning I (Winter semester 2023/2024)
- Machine Learning for Medical Image Analysis (Winter semester 2022/2023)
Student supervision
MSc students:
- Frederik Spieß (Sep 2025 - March 2026) – MSc Medical Informatics (University of Tübingen)
- Anna Schäfer (Oct 2025 - April 2026) – (University of Tübingen)
- Olivier Kanamugire (2024) – MSc in Mathematical Sciences (AIMS Rwanda)
Latest News
- February 2026 - New accepted paper - Our paper SoftCAM: Making black box models self-explainable for high-stakes decisions has been accepted at MIDL 2026. In this work, we present SoftCAM, a simple yet effective method to transform black-box CNNs into inherently self-explainable models, with extensive validation on medical imaging datasets. Unlike post-hoc methods, SoftCAM modifies the network architecture itself: we remove the global average pooling layer and replace the fully connected classifier with a spatially aware class-evidence layer, implemented as a convolutional layer. This design preserves spatial information and produces class-specific evidence maps that directly support the model’s predictions. We further enhance interpretability by applying ElasticNet regularization to the evidence maps to refine the explanations. Quantitative comparisons show that SoftCAM outperforms existing class activation map (CAM) methods without compromising classification performance.
- February 2026 - Top 100: IRCAI Global AI for SDGs Index - Our project on interpretable DR screening, recently published in PLoS Digital Health, has been selected as one of the worldwide top 100 AI solutions driving progress on the SDGs. The project description is available here.
- January 2026 - MIDL Young Researcher Board - I joined the MIDL Young Researcher Board as a member and will serve in this role for the next two years.
- August 2025 - New accepted paper - Our paper entitled A Hybrid Fully Convolutional CNN-Transformer Model for Inherently Interpretable Disease Detection from Retinal Fundus Images has been accepted for oral presentation at the 8th IMIMIC Workshop on Interpretability in Medical Image Computing at MICCAI 2025. In this work, we introduced the first inherently self-explainable hybrid sequential CNN-Transformer model for medical image analysis. Our architecture integrates a CNN feature extractor with dual-resolution convolutional windowed self-attention to capture both low- and high-frequency features while preserving spatial information. A convolutional classification head produces class-specific evidence maps that directly support the model’s predictions. To enhance interpretability, we apply a Lasso penalty to these maps, encouraging sparsity and improving the clarity and faithfulness of the explanations.
- June 2025 - New accepted paper - Our paper entitled Prototype-Guided and Lightweight Adapters for Inherent Interpretation and Generalisation in Federated Learning has been accepted at MICCAI 2025.
- June 2025 - New preprint alert - Clinically Interpretable Deep Learning via Sparse BagNets for Epiretinal Membrane and Related Pathology Detection. In this work, we trained the sparse BagNet model, an inherently interpretable deep learning model, to detect ERM in OCT images, showing that it performed on par with a comparable black-box model and generalised well to external data. Through a clinical user study with ophthalmologists, we showed that the visual explanations readily provided by the sparse BagNet model for its decisions are well-aligned with clinical expertise. This study proposes potential directions for clinical implementation of the sparse BagNet model to guide clinical decisions in practice.
- May 2025 - New journal paper - An Inherently Interpretable AI model improves Screening Speed and Accuracy for Early Diabetic Retinopathy was published in PLoS Digital Health. In this work, we evaluate our self-explainable model, the sparse BagNet, on a benchmark of 10 retinal fundus datasets for early detection of diabetic retinopathy. Despite being inherently interpretable, the sparse BagNet performs comparably to traditional black-box models while improving ophthalmologists’ diagnostic accuracy by ~20% and reducing decision time by ~25% for challenging cases.
- November 2024 - 7th place in the MIDRC XAI challenge - Our inherently interpretable Dense BagNet model achieved seventh place in the MIDRC XAI Challenge, a MICCAI-endorsed competition titled “XAI: Decoding AI Decisions for Pneumonia on Chest Radiographs.”
- October 2024 - Two papers presented at MICCAI 2024 - We presented our accepted papers This actually looks like that: Proto-BagNets for local and global interpretability-by-design and Interpretable-by-design Deep Survival Analysis for Disease Progression Modeling at MICCAI 2024.
- September 2024 - I joined the Africa Science Entrepreneurship Program as a mentor for one year
- July 2024 - Participation in the UCL Medical Image Computing Summer School (MedICSS) in London
- June 2024 - Speaker at IndabaX Cameroon 2024
- June 2024 - Two papers accepted at MICCAI 2024
- This actually looks like that: Proto-BagNets for local and global interpretability-by-design
- Interpretable-by-design Deep Survival Analysis for Disease Progression Modeling
- March 2024 - Tutor during the Foundational Methods in Data Science Training School in Kigali organized by the AIMS Research and Innovation Center (AIMS-RIC). I gave a talk on “Trustworthy ML for medical image diagnosis”.
- August 2023 - Invitation to the TV show “Ladies of Another View” to talk about the challenges of Artificial Intelligence: watch
- July 2023 - Oral presentation at the Medical Imaging with Deep Learning (MIDL-23) conference - I presented our paper on trustworthy AI for image analysis titled Sparse Activations for Interpretable Disease Grading: presentation | slide | conference webpage
- April 2023 - Participation with an oral presentation at the From Theory to Practice (T2P) workshop in Data Science organized by Quantum Leap Africa (QLA). I gave a presentation on interpretable AI models for medical image diagnosis: slide
- March 2023 - Participation with an oral presentation at the Bern Interpretable AI Symposium (BIAS) in Medical Image Analysis. I presented the results of our paper Sparse Activations for Interpretable Disease Grading accepted at MIDL 2023: presentation | slide
- October 2022 - Volunteer trainer at the “Collaborative Science Symposium with clinical data for healthcare professionals” in Yobe State (Nigeria), organized by the Biomedical Science Research and Training Centre (BioRTC) at Yobe State University
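As an aside, the core idea behind the SoftCAM architecture described in the February 2026 item above can be sketched in a few lines. This is only an illustrative sketch, not the paper's actual implementation: the function names and the exact ElasticNet form are assumptions, and the 1x1 convolution is written here as a channel-wise linear map over precomputed CNN features.

```python
import numpy as np

def softcam_head(features, weights, bias):
    """Sketch of the SoftCAM idea: a 1x1 convolution (a channel-wise
    linear map) turns CNN features of shape (C, H, W) into class-specific
    evidence maps of shape (K, H, W). Spatially averaging each map gives
    the class logit, replacing global average pooling plus a fully
    connected classifier while preserving spatial information."""
    # (K, C) x (C, H, W) -> (K, H, W) evidence maps, one per class
    maps = np.tensordot(weights, features, axes=([1], [0])) + bias[:, None, None]
    logits = maps.mean(axis=(1, 2))  # one logit per class
    return logits, maps

def elastic_net_penalty(maps, l1=1e-4, l2=1e-4):
    # ElasticNet regularizer on the evidence maps (assumed form):
    # the L1 term encourages sparse explanations, the L2 term smooth ones.
    return l1 * np.abs(maps).mean() + l2 * (maps ** 2).mean()
```

Because the evidence maps feed directly into the prediction, they are faithful by construction, unlike post-hoc CAM heatmaps computed after the fact.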
Reviewing
- MICCAI (4+4+4): 2024, 2025, 2026
- MIDL (1+4): 2025, 2026
- FIAMI-MICCAI Workshop (3): 2025
- IMIMIC-MICCAI Workshop (3): 2025
- Plos Digital Health (1)