Selective Trimmed Average: A Resilient Federated Learning Algorithm With Deterministic Guarantees on the Optimality Approximation

Franceschelli, Mauro (last author)
2024-01-01

Abstract

The federated learning (FL) paradigm aims to distribute the computational burden of the training process among several computation units, usually called agents or workers, while keeping the local training datasets private. This is generally achieved through a server-worker architecture in which agents iteratively update local models and communicate the local parameters to a server, which aggregates them and returns the result to the agents. However, adversarial agents, which may intentionally exchange malicious parameters or may hold corrupted local datasets, can jeopardize the FL process. We therefore propose selective trimmed average (SETA), a resilient algorithm that mitigates the undesirable effects of a number of misbehaving agents on the global model. SETA is based on properly filtering and combining the exchanged parameters. We mathematically prove that the proposed algorithm is resilient against data and local model poisoning attacks. Most resilient methods presented so far in the literature assume that a trusted server is available. In contrast, our algorithm works in both server-worker and shared-memory architectures, where the latter removes the need for a trusted server. The theoretical findings are corroborated by numerical results on the MNIST dataset and on the multiclass weather dataset (MWD).
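
For intuition only, the following Python sketch shows a generic coordinate-wise trimmed-mean aggregation step, which illustrates the filter-then-combine idea described in the abstract. It is not the SETA selection rule from the paper; the function name, the assumed bound f on the number of misbehaving workers, and the NumPy-based setup are illustrative assumptions.

import numpy as np

def trimmed_mean_aggregate(local_params, f):
    # Illustrative sketch (not the paper's SETA rule): coordinate-wise trimmed mean.
    # local_params: list of equally sized 1-D arrays, one per worker.
    # f: assumed upper bound on the number of misbehaving workers.
    stacked = np.stack(local_params)        # shape (n_workers, dim)
    n = stacked.shape[0]
    if n <= 2 * f:
        raise ValueError("need more than 2*f workers to trim f values per side")
    sorted_vals = np.sort(stacked, axis=0)  # sort every coordinate independently
    trimmed = sorted_vals[f:n - f]          # drop the f largest and f smallest values per coordinate
    return trimmed.mean(axis=0)             # average the surviving values

# Toy usage: six honest workers near zero and one model-poisoning worker
# sending huge values; the trimmed values exclude the outlier in every coordinate.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=4) for _ in range(6)]
poisoned = [np.full(4, 1e6)]
print(trimmed_mean_aggregate(honest + poisoned, f=1))

The same filter-then-combine step can in principle be executed either by a central server or by each agent locally after reading all parameters from a shared memory.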
Year: 2024
Keywords: Adversarial attacks; distributed optimization; multiagent systems; resilient federated learning (FL)
Files in this product:
File: Selective_Trimmed_Average_A_Resilient_Federated_Learning_Algorithm_With_Deterministic_Guarantees_on_the_Optimality_Approximation.pdf
Access: open access
Description: Main article
Type: publisher's version (VoR)
Size: 2.2 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11584/388685
Citations
  • PMC (PubMed Central): 0
  • Scopus: 2
  • Web of Science (ISI): 1