About me
I am a PhD student in the Department of Data Science at the Hertie Institute for Artificial Intelligence in Brain Health at the University of Tübingen and an IMPRS-IS scholar since June 2021. I work on Interpretable Machine Learning for Safe Medical Diagnostics and am keenly interested in research that can safely and securely benefit patient health or user well-being.
Bio
I hold a bachelor's degree (BSc) in Computer Mathematics and a master's degree (MSc) in Computer Science from the University of Dschang, Cameroon. After a second master's degree (MSc) in Applied Mathematics at the African Institute for Mathematical Sciences (AIMS) in South Africa, I joined the International Max Planck Research School for Intelligent Systems (IMPRS-IS), where I am pursuing my PhD in Machine Learning for Medical Image Analysis under the supervision of Prof. Dr. Philipp Berens.
Research interests
My research focuses on explainable deep learning models for clinical diagnosis, particularly inherently interpretable models for medical image analysis with applications in ophthalmology. Beyond interpretability, I am also interested in computer vision, vision-language models (VLMs), clinical ethics, fairness, resource-efficient deep learning, and the real-world deployment of deep learning models.
Teaching Assistant (TA)
- Machine Learning I (Winter semester 2023/2024)
- Machine Learning for Medical Image Analysis (Winter semester 2022/2023)
Latest News
- June 2025 - New accepted paper - Our paper titled Prototype-Guided and Lightweight Adapters for Inherent Interpretation and Generalisation in Federated Learning has been accepted at MICCAI 2025.
- June 2025 - New preprint alert - Clinically Interpretable Deep Learning via Sparse BagNets for Epiretinal Membrane and Related Pathology Detection. In this work, we trained the sparse BagNet model, an inherently interpretable deep learning model, to detect ERM in optical coherence tomography (OCT) images, showing that it performed on par with a comparable black-box model and generalised well to external data. Through a clinical user study with ophthalmologists, we showed that the visual explanations readily provided by the sparse BagNet model for its decisions are well-aligned with clinical expertise. This study proposes potential directions for clinical implementation of the sparse BagNet model to guide clinical decisions in practice.
- May 2025 - New journal paper - An Inherently Interpretable AI model improves Screening Speed and Accuracy for Early Diabetic Retinopathy was published in PLOS Digital Health. In this work, we evaluate our self-explainable model, the sparse BagNet, on a benchmark of 10 retinal fundus datasets for early detection of diabetic retinopathy. Despite being inherently interpretable, the sparse BagNet performs comparably to traditional black-box models while improving ophthalmologists' diagnostic accuracy by ~20% and reducing decision time by ~25% for challenging cases.
- May 2025 - New preprint alert - Soft-CAM: Making black box models self-explainable for high-stakes decisions. In this work, we present Soft-CAM, a simple yet effective method to transform black-box CNNs into inherently self-explainable models, with extensive validation on medical imaging datasets. Unlike post-hoc methods, Soft-CAM modifies the network architecture itself: we remove the global average pooling layer and replace the fully connected classifier with a spatially aware class-evidence layer, implemented as a convolutional layer. This design preserves spatial information and produces class-specific evidence maps that directly support the model's predictions. We further enhance interpretability by applying ElasticNet regularization to the evidence maps to refine the explanations. Quantitative comparisons show that Soft-CAM outperforms existing class activation map (CAM) methods, all without compromising classification performance (a toy sketch of this construction appears after this news list).
- April 2025 - New preprint alert - A Hybrid Fully Convolutional CNN-Transformer Model for Inherently Interpretable Medical Image Classification. In this work, we introduce the first inherently self-explainable hybrid sequential CNN-Transformer model for medical image analysis. Our architecture integrates a CNN feature extractor with dual-resolution convolutional windowed self-attention to capture both low- and high-frequency features while preserving spatial information. A convolutional classification head produces class-specific evidence maps that directly support the model's predictions. To enhance interpretability, we apply a Lasso penalty to these maps, encouraging sparsity and improving the clarity and faithfulness of the explanations.
- November 2024 - 7th place in the MIDRC XAI challenge - Our inherently interpretable Dense BagNet model achieved seventh place in the MIDRC XAI Challenge, a MICCAI-endorsed competition titled “XAI: Decoding AI Decisions for Pneumonia on Chest Radiographs.”
- October 2024 - Two papers presented at MICCAI 2024 - We presented our accepted papers This actually looks like that: Proto-BagNets for local and global interpretability-by-design and Interpretable-by-design Deep Survival Analysis for Disease Progression Modeling at MICCAI 2024.
- September 2024 - I joined the Africa Science Entrepreneurship Program as a mentor for one year
- July 2024 - Participated in the Medical Image Computing Summer School (MedICSS) hosted at UCL, London
- June 2024 - Speaker at IndabaX Cameroon 2024
- June 2024 - Two papers accepted at MICCAI 2024
- This actually looks like that: Proto-BagNets for local and global interpretability-by-design
- Interpretable-by-design Deep Survival Analysis for Disease Progression Modeling
- March 2024 - Tutor during the Foundational Methods in Data Science Training School in Kigali organized by the AIMS Research and Innovation Center (AIMS-RIC). I gave a talk on “Trustworthy ML for medical image diagnosis”.
- August 2023 - Invited to the TV show “Ladies of Another View” to talk about the challenges of Artificial Intelligence: watch
- July 2023 - Oral presentation at the Medical Imaging with Deep Learning (MIDL-23) conference - I presented our paper on trustworthy AI for image analysis titled Sparse Activations for Interpretable Disease Grading: presentation | slide | conference webpage
- April 2023 - Oral presentation at the From Theory to Practice (T2P) workshop in Data Science organized by Quantum Leap Africa (QLA). I gave a presentation on interpretable AI models for medical image diagnosis: slide
- March 2023 - Oral presentation at the Bern Interpretable AI Symposium (BIAS) in Medical Image Analysis. I presented the results of our paper Sparse Activations for Interpretable Disease Grading, accepted at MIDL 2023: presentation | slide
- October 2022 - Volunteer trainer at the “Collaborative Science Symposium with clinical data for healthcare professionals” in Yobe State (Nigeria), organized by the Biomedical Science Research and Training Centre (BioRTC), Yobe State University
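For readers curious about the class-evidence idea behind the Soft-CAM and hybrid CNN-Transformer preprints above, here is a minimal, hypothetical PyTorch sketch, not the papers' code: it assumes a ResNet-50 backbone, mean-pools the evidence maps into class logits, and uses illustrative ElasticNet coefficients.

```python
# Minimal sketch (assumptions: ResNet-50 backbone, mean pooling of the
# evidence maps into logits, illustrative penalty weights). Not the
# authors' implementation.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class EvidenceClassifier(nn.Module):
    """CNN whose classifier is a 1x1 conv producing class-evidence maps."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        backbone = resnet50(weights=None)
        # Drop the global average pooling and fully connected layers.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Spatially aware class-evidence layer: one 1x1 filter per class.
        self.evidence = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x):
        fmap = self.features(x)          # (B, 2048, H', W')
        maps = self.evidence(fmap)       # class-evidence maps (B, C, H', W')
        logits = maps.mean(dim=(2, 3))   # spatial pooling -> class logits
        return logits, maps

def elasticnet_penalty(maps, l1=1e-4, l2=1e-4):
    """ElasticNet regularizer on the evidence maps (coefficients illustrative)."""
    return l1 * maps.abs().mean() + l2 * maps.pow(2).mean()

if __name__ == "__main__":
    model = EvidenceClassifier(num_classes=5)
    x = torch.randn(2, 3, 224, 224)      # dummy batch of images
    logits, maps = model(x)
    labels = torch.tensor([0, 3])
    loss = nn.functional.cross_entropy(logits, labels) + elasticnet_penalty(maps)
    print(logits.shape, maps.shape, loss.item())
```

Because the evidence maps are produced before any pooling, they can be upsampled to the input resolution and shown directly as the explanation for each prediction; swapping the ElasticNet term for a pure L1 penalty gives the Lasso-style sparsity described in the hybrid CNN-Transformer preprint.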