Thesis of Artur Vieira-Pereira


Subject:
Protecting cloud-edge continuum against privacy and robustness threats

Start date: 01/12/2025
End date (estimated): 01/12/2028

Advisor: Sara Bouchenak

Summary:

Federated learning (FL) is a promising paradigm that is gaining traction in the context of privacy-preserving machine learning for edge computing systems [1]. With FL, several data owners, called clients (e.g., organizations in cross-silo FL), can collaboratively train a model on their private data without having to send their raw data to external service providers. FL was rapidly adopted in several thriving applications such as digital healthcare [2], which generates the world's largest volume of data [3]. Decentralized Learning (DL) goes further by providing serverless federated learning: the data are kept at the clients and no central server is needed. DL thus relies on distributed, decentralized protocols that allow clients to jointly build a global model [4,5,6].
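To make the serverless setting concrete, the following minimal sketch illustrates one common family of DL protocols: clients alternate local gradient steps on their private data with gossip averaging among neighbours. The least-squares task, the ring topology and all names here are illustrative assumptions, not part of the thesis.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step of least-squares regression on this client's
    # private data; the raw data (X, y) never leave the client.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def gossip_round(models, neighbors):
    # Each client averages its own model with its neighbors' models;
    # only model parameters are exchanged, and no server is involved.
    return [np.mean([models[j] for j in [i] + neighbors[i]], axis=0)
            for i in range(len(models))]

rng = np.random.default_rng(0)
d, n_clients = 3, 3
w_true = rng.normal(size=d)

# Each client holds 20 private samples generated from the same ground truth.
data = []
for _ in range(n_clients):
    X = rng.normal(size=(20, d))
    data.append((X, X @ w_true))

neighbors = {0: [1], 1: [2], 2: [0]}  # ring topology (illustrative)

models = [np.zeros(d) for _ in range(n_clients)]
for _ in range(300):
    models = [local_step(w, X, y) for w, (X, y) in zip(models, data)]
    models = gossip_round(models, neighbors)
```

After enough rounds, all local models agree on a common solution, which is the sense in which the clients "build a global model" despite never pooling their data.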
Although DL is a first step towards privacy, since it keeps the data local to each client, this is not sufficient: the model parameters shared in DL are vulnerable to privacy attacks [7], as shown in a line of recent literature [8]. Furthermore, DL is more vulnerable to malicious behaviour from clients, which may inject poisoned information into data and models, resulting in misbehaving, non-robust DL models. Recent studies show that robustness and privacy in DL may compete; handling them independently, as is usually done, may have negative side effects on one another.
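The poisoning risk can be illustrated with a toy example (the sign-flipping attacker and the median aggregator below are illustrative assumptions, not the thesis's method): a single malicious client can cancel plain parameter averaging entirely, while a robust aggregator such as the coordinate-wise median limits the damage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nine honest clients send model updates close to the true direction [1, 1].
honest = [np.array([1.0, 1.0]) + 0.01 * rng.normal(size=2) for _ in range(9)]

# A single poisoner sends the negated sum of the honest updates, which
# cancels them out under plain averaging (a classic model-poisoning sketch).
attacker = -np.sum(honest, axis=0)
updates = honest + [attacker]

plain_avg = np.mean(updates, axis=0)               # dragged to ~[0, 0]
robust_agg = np.median(np.stack(updates), axis=0)  # stays close to [1, 1]
```

Note that the median here buys robustness at the cost of statistical efficiency, and such defences typically interact with privacy mechanisms such as noise addition, which is precisely the kind of tension between robustness, privacy and utility that this project targets.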
Therefore, there is a need for a novel multi-objective approach to FL robustness and protection against privacy threats. This project tackles this challenge and aims to precisely handle the issues raised at the intersection of DL model privacy, robustness and utility, through: (i) novel DL protocols; (ii) a multi-objective approach to trade off privacy, robustness and utility, these objectives being antagonistic; (iii) the application of these techniques to DL in edge-cloud continuum systems.