Trusted Decentralized Federated Learning
Authors: Anousheh Gholami, Nariman Torkzaban, and John S. Baras
Conference: 2022 IEEE Consumer Communications & Networking Conference (CCNC 2022), Workshop: 1st International Workshop on Secure FunctiON ChAining and FederaTed AI (SONATAI'22), pp. 1-6, Virtual
Date: January 8-11, 2022
Federated learning (FL) has received significant attention from both academia and industry as an emerging paradigm for building machine learning models in a communication-efficient and privacy-preserving manner. It enables a potentially massive number of resource-constrained agents (e.g., mobile and IoT devices) to train a model through a repeated process of local training on the agents and centralized model aggregation on a central server. To overcome the single-point-of-failure and scalability issues of traditional FL frameworks, decentralized (server-less) FL has been proposed. In a decentralized FL setting, agents implement consensus techniques by exchanging local model updates. Although this scheme bypasses the direct exchange of raw data between the collaborating agents, it remains vulnerable to various security and privacy threats, such as data poisoning attacks. In this paper, we propose trust as a metric to measure the trustworthiness of the FL agents and thereby enhance the
security of the FL training. We first elaborate on trust as a security metric by presenting a mathematical framework for trust computation and aggregation within a multi-agent system. We then discuss how this framework can be incorporated into a decentralized FL setup, introducing the trusted decentralized FL algorithm. Finally, we validate our theoretical findings by means of numerical experiments.
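The core idea of combining trust with decentralized aggregation can be sketched as follows. This is a minimal illustration under assumed details, not the paper's actual algorithm: the function name, the fixed self-weight, and the toy trust scores are hypothetical choices for exposition. Each agent averages its neighbors' model updates with weights proportional to the trust it assigns them, so updates from distrusted (potentially poisoning) neighbors are suppressed.

```python
import numpy as np

def trust_weighted_aggregate(local_model, neighbor_models, trust_scores, self_weight=0.5):
    """Combine neighbor model updates, weighted by normalized trust.

    local_model: this agent's parameter vector (np.ndarray)
    neighbor_models: dict mapping neighbor id -> parameter vector
    trust_scores: dict mapping neighbor id -> trust value in [0, 1]
    self_weight: fixed weight kept on the agent's own model (assumed constant here)
    """
    total_trust = sum(trust_scores[j] for j in neighbor_models)
    if total_trust == 0:
        # No trusted neighbors: fall back to the purely local model.
        return local_model.copy()
    aggregated = self_weight * local_model
    for j, model in neighbor_models.items():
        # Each neighbor's share of the remaining weight is proportional
        # to its trust; a zero-trust neighbor contributes nothing.
        w = (1.0 - self_weight) * trust_scores[j] / total_trust
        aggregated += w * model
    return aggregated

# Toy example: two trusted neighbors and one fully distrusted neighbor
# whose outlier update (e.g., a poisoned model) is ignored.
local = np.array([1.0, 1.0])
neighbors = {"a": np.array([1.2, 0.8]),
             "b": np.array([0.9, 1.1]),
             "c": np.array([10.0, -10.0])}  # outlier update
trust = {"a": 0.9, "b": 0.8, "c": 0.0}
print(trust_weighted_aggregate(local, neighbors, trust))
```

In a full decentralized FL round, each agent would run such an aggregation over its graph neighbors after every local training step, with the trust scores themselves updated by the trust computation and aggregation framework described in the paper.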