Thesis of Théo Jaunet

Deep Learning Interpretability with Visual Analytics: Exploring Reasoning and Bias Exploitation

Defense date: 16/05/2022

Advisor: Christian Wolf
Co-advisor: Romain Vuillemot


In recent years, Artificial Intelligence has entered our everyday lives. However, to learn a decision process, AI models must assimilate huge amounts of data, which makes the reasons behind their decisions obscure. This has led to the birth of the eXplainable AI (XAI) research field, which aims to analyze these models by exploring their reasoning capabilities and bias exploitation. This thesis is dedicated to the creation of visual analytics tools built to empower the designers of such models to interpret their decisions, and thus eventually improve them. In particular, this thesis focuses on three different tasks and models: first, answering natural-language questions about images with transformers; second, automatic navigation in an environment with deep reinforcement learning; and third, self-localization with convolutional regression models. All these visualization tools are open-source, and prototypes are available online.

Jury:
Céline Hudelot, Professor, MICS, CentraleSupélec: Examiner
David Auber, Professor, LaBRI, Université de Bordeaux: Examiner
Shixia Liu, Professor, Tsinghua University: Examiner
Hendrik Strobelt, Researcher, IBM Research, MIT-IBM Watson AI Lab: Examiner
Liming Chen, Professor, LIRIS, École Centrale de Lyon: Examiner
Christian Wolf, Researcher, Naver Labs Europe: Thesis advisor
Romain Vuillemot, Associate Professor, LIRIS, École Centrale de Lyon: Co-advisor