Interpretable Machine Learning

Automatic scan plane detection with decision explanation using SonoNet

The rapid development and early successes of deep learning in medical image analysis (and other fields) have led the field to prioritize predictive accuracy over human integration. However, it is becoming increasingly clear that black-box models are unlikely to find clinical acceptance: they can lead to ethical problems when neither the patient nor the doctor understands the reasoning behind a prediction, and they are difficult to certify. Our research goals in this area are to develop adequate explanations for the predictions of deep learning models and, perhaps more importantly, to build inherently interpretable models rooted in prior clinical knowledge.
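As a minimal illustration of what a prediction explanation can look like, the sketch below computes a gradient-based saliency map for a toy logistic-regression "classifier". This is a made-up example, not the SonoNet method: the stand-in model, weights, and inputs are all hypothetical, but the underlying idea of ranking input pixels by their influence on the output is the same one used to explain deep network predictions.

```python
import math

# Toy stand-in for a deep classifier: a logistic-regression model over a
# flattened 2x2 "image". All numbers here are illustrative, not from SonoNet.
def predict(w, b, x):
    """Sigmoid output of a linear model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def saliency(w, b, x):
    """Absolute input gradient |d predict / d x_i| = sigma*(1-sigma)*|w_i|.

    Large values mark the input features (pixels) that most influence the
    prediction -- a minimal form of decision explanation."""
    p = predict(w, b, x)
    return [abs(p * (1.0 - p) * wi) for wi in w]

# Hypothetical weights and one hypothetical input image.
w = [0.8, -0.1, 0.3, -1.2]
b = 0.05
x = [0.5, 1.0, -0.7, 0.2]

s = saliency(w, b, x)
print(s)  # per-pixel importance scores; here the last pixel dominates
```

For a real convolutional network the same gradient is obtained with automatic differentiation with respect to the input image, and the resulting map is overlaid on the ultrasound frame to show which regions drove the scan plane classification.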

Selected publications

Machine Learning in Medical Image Analysis
Bridging the gap between AI and clinical practice