PhD Student (E13 TV-L, 65%, m/f/d) in Interpretable Machine Learning for Medical Image Analysis
Developing deep learning methods that incorporate clinical knowledge
The introduction of deep learning has revolutionised medical image analysis along with many other fields. Yet, although neural network technology is now ubiquitous and deep learning algorithms are routinely used in many everyday applications, they are rarely employed in practical clinical settings. A major reason for this is that standard deep learning methods are black boxes that cannot explain their reasoning. This causes clinicians to lack trust in the technology, is at the root of ethical and legal issues when neither the doctor nor the patient understands an algorithmic decision, and complicates the certification of machine-learning-based products, thus preventing commercialisation and transfer into clinical practice.
The aim of this project is to develop inherently interpretable machine learning techniques by combining neural-network-based learning algorithms with clinically known patterns and, in particular, clinically known reasoning. Explicitly using this a priori information is likely to facilitate learning, will lead to more robust representations, and may ultimately form a new foundation for clinically accepted deep learning models. The successful candidate will perform research at the intersection of medical applications and machine learning. A particular methodological focus will be on combining known representations with learned representations and further processing them with ideas from machine reasoning.
What we are looking for
You have a strong academic background and hold an M.Sc. or equivalent degree in a quantitative discipline such as computer science, physics, mathematics, statistics, electrical engineering, or biomedical engineering. You are self-driven, curious, and enjoy analytical thinking. You have a strong motivation to do machine learning research as well as a keen interest in solving real-world clinical problems. Ideally, you have prior experience with machine learning and strong programming skills in Python. Prior experience working with medical imaging data is a plus, but not required.
Note: A B.Sc. degree is not sufficient to qualify for this position.
What we offer
This project will be conducted in the Machine Learning in Medical Image Analysis (MLMIA) group under the supervision of Dr. Christian Baumgartner. The group is located in the machine learning building of the University of Tübingen, together with the Cluster of Excellence “Machine Learning: New Perspectives for Science”, of which the group is part, and the Tübingen AI Center. The successful candidate will thus be embedded in an extraordinarily vibrant machine learning community in which regular exchanges of ideas and collaborations are common. The MLMIA group also greatly values direct exchange with clinical partners from the University Hospital Tübingen.
Tübingen is a scenic university town on the Neckar river in south-western Germany. The quality of life is exceptionally high, the atmosphere is diverse and inclusive, and most locals speak English. Tübingen offers excellent research opportunities thanks to the University, four Max Planck institutes, the University Hospital, and Europe’s largest AI research consortium. You can find out more about Tübingen here: https://www.tuebingen.de/en/.
How to apply
Please send a cover letter, your CV, copies of your university transcripts, and any additional information supporting your application to Christian Baumgartner (firstname.lastname@example.org). If you have any questions about the position, please do not hesitate to contact Christian directly. The university seeks to raise the number of women in research and teaching and therefore urges qualified women scientists to apply for this position. Equally qualified applicants with disabilities will be given preference. The employment will be carried out by the central administration of the University of Tübingen. Please submit your application by May 2nd, 2021.