12-month post-doc position at LIG-MRIM

Explainable medical screening from still images in partnership with an innovative start-up

Supervisors: Jean-Pierre Chevallet (UGA) and Georges Quénot (CNRS)
Contact: Jean-Pierre.Chevallet@imag.fr, Georges.Quenot@imag.fr
Location: Grenoble (France), Laboratoire Informatique de Grenoble
Starting from: beginning of 2021.
MRIM team web site: http://lig-mrim.imag.fr/
LIG web site: http://www.liglab.fr/

Keywords: Medical Image analysis, Deep Learning, Explainable AI, Computer Vision.


The MRIM research team, in collaboration with a start-up, is looking for a post-doc specialized in AI applied to object recognition in images for medical screening. The disease of interest manifests itself through visible clinical signs. The recognition system will detect and analyze these signs in standard 2D photographs taken with consumer digital cameras (e.g. smartphones); the photographs will be sent to a remote AI platform that performs the image processing for medical screening.
You will join the MRIM team, which has expertise in AI and image processing for information access. The project addresses a major international health concern and has a strong innovative component that should lead to major publications in the medical domain. Through this project, you will have the opportunity to work with real medical data. The research program aims to demonstrate the feasibility and relevance of our digital approach for large-scale, low-cost, general-public screening.


The objective of this post-doc is to develop a system that performs medical screening from images taken with a standard color camera. Given a training collection of images annotated with the presence or absence of a pathology, and with its level of development when present, the system should be trained to make a first prediction on unseen images, indicating whether the patient should see a doctor. The goal of the project is not only to produce a system able to perform this pre-diagnosis task, but also to provide explanations of how each conclusion was reached. To this end, the system should identify the types of elements or attributes it uses to make its decisions, and how they are used, ideally in terms of visual clues and logical rules over them. The scientific work will focus mostly on the explainability part, while classical deep learning-based methods will be used for the decision part. The start-up will provide the training and test images, as well as all the expertise related to the targeted pathology.
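To illustrate the kind of decision-plus-explanation pipeline described above, here is a minimal sketch in PyTorch, assuming a CNN classifier with a Grad-CAM-style saliency map as the visual-clue component. The network, class count, and image sizes are illustrative placeholders, not the project's actual architecture or data.

```python
import torch
import torch.nn as nn

# Tiny CNN standing in for the screening classifier (illustrative only).
class ScreeningNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)                   # (B, 16, H, W) feature map
        logits = self.fc(self.pool(fmap).flatten(1))
        return logits, fmap

def grad_cam(model, image, target_class):
    """Coarse saliency map: which image regions drove the prediction."""
    logits, fmap = model(image)
    fmap.retain_grad()                            # fmap is a non-leaf tensor
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = torch.relu((weights * fmap).sum(dim=1))       # (B, H, W)
    return cam / (cam.max() + 1e-8)               # normalize to [0, 1]

model = ScreeningNet()
x = torch.rand(1, 3, 64, 64)                      # stand-in for a photograph
logits, _ = model(x)
cam = grad_cam(model, x, target_class=logits.argmax().item())
print(cam.shape)  # torch.Size([1, 64, 64])
```

The saliency map can be overlaid on the input photograph to show the clinician which regions supported the decision; rule-based explanations on detected attributes would sit on top of such visual evidence.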

Profile & Skills

PhD in Computer Science and/or Applied Mathematics for Computer Science.

Strong knowledge in Machine Learning.
Good programming skills and experience with deep learning frameworks (e.g. PyTorch, TensorFlow).
Image processing and computer vision, in theory and in practice.

Laboratoire d'Informatique de Grenoble
Bâtiment IMAG
700 avenue Centrale
CS 40700, 38058 Grenoble Cedex 9 - France
Phone: +33 4 57 42 15 48
Group Leader: Georges QUÉNOT

Paper accepted at EDBT 2020: "Fairness in Online Jobs: A Case Study on TaskRabbit and Google", with S. Amer-Yahia, S. Elbassuoni, A. Ghizzawi, R. M. Borromeo, E. Hoareau, and P. Mulhem.

Anuvabh Dutt brilliantly defended his PhD thesis on December 17th, 2019.

On December 2nd, 2019, an article on algorithmic testing was published on the BInaire blog.

Kodicare (continuous evaluation of web search engines) ANR International Research Project accepted for funding. Cooperation with RSA Vienna and Qwant; 3 years, starting 11/2019.

The paper "Quelques pas vers l'Honnêteté et l'Explicabilité de moteurs de recherche sur le Web" (Towards honesty and explainability of Web search engines), by Philippe Mulhem, Lydie du Bousquet, and Sara Lakah, was awarded Best Paper at the CORIA 2019 conference.

We organized the 40th European Conference on Information Retrieval (ECIR) in March 2018.