PhD thesis of Thomas Petellat


Subject:
Multi-modal explainable machine learning for exploring consciousness recovery of coma patients

Start date: 01/10/2025
Estimated end date: 01/10/2028

Supervisor: Stefan Duffner

Abstract:

While consciousness is currently seen as the result of processes in the brain, ordinary human experience is in fact embedded in a web of causal relations that link the brain to the body and the environment (Bayne et al., 2020). Embodied cognition is a naturalistic theory in which consciousness is associated with a dynamic interaction between brain, body and environment (BBE) (Thompson and Varela, 2001). Indeed, from an evolutionary point of view, the nervous system appears to be dedicated to perceptual and motor processes that allow interaction with the environment (Thompson and Varela, 2001).

One way to better understand consciousness is to study its disorders and their recovery. Coma is a state of unconsciousness in which patients cannot be awakened. Those who recover may transition through a disorder of consciousness (DOC). When such patients recover, they go through different clinical states that are characterized by the recovery of arousal and/or awareness and by the recovery of BBE interactions.
We have video, ECG and hd-EEG data from 20 healthy subjects and 60 DOC patients, which will allow the development of more precise and robust machine learning models. The data have already been anonymised, and the PhD student will only work on feature vectors that do not contain any sensitive or private information.
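
To make the data setting concrete, a minimal sketch of how such anonymised, multi-modal feature vectors could be organised is given below; the field names, dimensions and example values are hypothetical assumptions for illustration, not the project's actual schema.

from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalSample:
    """One anonymised recording, reduced to per-modality feature vectors.

    Field names and dimensions are illustrative assumptions, not the project's schema.
    """
    subject_id: str              # pseudonymised identifier, no personal data
    video_features: np.ndarray   # (T, d_video), e.g. motion/pose descriptors per time window
    ecg_features: np.ndarray     # (T, d_ecg),   e.g. heart-rate variability statistics
    eeg_features: np.ndarray     # (T, d_eeg),   e.g. band power per hd-EEG channel group
    label: int                   # clinical state, e.g. 0 = healthy control, 1 = DOC patient

# Example: a synthetic sample with 100 time windows per modality.
sample = MultimodalSample(
    subject_id="P017",
    video_features=np.random.randn(100, 64),
    ecg_features=np.random.randn(100, 16),
    eeg_features=np.random.randn(100, 128),
    label=1,
)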
Different learning strategies and models will be developed to deal with the generally high level of noise and with the imbalance between relevant and irrelevant data. Combining these modalities with new deep learning models, together with adapting our existing models for unsupervised learning on multivariate time series (Berlemont et al., 2017), will allow us to further analyse complex correlations and co-occurrences of characteristics. By focusing on explainable methods and results (explainable AI), we aim to gain insights into BBE interactions and to generate new neuroscientific hypotheses.
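
As a purely illustrative sketch of one possible modelling direction (not the thesis' actual architecture), the example below fuses per-modality sequence encoders by concatenation and counters class imbalance with inverse-frequency class weights in the loss. All layer choices and dimensions are assumptions; the 20/60 class counts used for the weighting are taken from the cohort description above.

import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Illustrative late-fusion model: one encoder per modality, fused by concatenation.

    GRU encoders stand in for whatever sequence models are eventually chosen
    for the video, ECG and hd-EEG feature streams; dimensions are hypothetical.
    """
    def __init__(self, d_video=64, d_ecg=16, d_eeg=128, d_hidden=64, n_classes=2):
        super().__init__()
        self.enc_video = nn.GRU(d_video, d_hidden, batch_first=True)
        self.enc_ecg = nn.GRU(d_ecg, d_hidden, batch_first=True)
        self.enc_eeg = nn.GRU(d_eeg, d_hidden, batch_first=True)
        self.head = nn.Linear(3 * d_hidden, n_classes)

    def forward(self, video, ecg, eeg):
        # Keep the last hidden state of each modality as its summary representation.
        _, h_v = self.enc_video(video)
        _, h_e = self.enc_ecg(ecg)
        _, h_g = self.enc_eeg(eeg)
        fused = torch.cat([h_v[-1], h_e[-1], h_g[-1]], dim=-1)
        return self.head(fused)

# Class imbalance mitigated with inverse-frequency class weights in the loss.
class_counts = torch.tensor([20.0, 60.0])           # healthy controls vs. DOC patients
weights = class_counts.sum() / (2 * class_counts)   # inverse-frequency weighting
criterion = nn.CrossEntropyLoss(weight=weights)

# Toy forward pass on random tensors shaped (batch, time windows, features).
model = LateFusionClassifier()
video = torch.randn(8, 100, 64)
ecg = torch.randn(8, 100, 16)
eeg = torch.randn(8, 100, 128)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(video, ecg, eeg), labels)

Explainability methods such as attention inspection or post-hoc attribution could then be applied on top of such a fused representation to relate predictions back to specific modalities and time windows.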