Theory

Weighting Schemes for One-Shot Federated Learning

Authors: Marie Garin, Theodoros Evgeniou, Nicolas Vayatis

This paper focuses on one-shot aggregation of statistical estimates computed on disjoint data sources for federated learning, in the context of empirical risk minimization. We exploit the role of the local sample sizes in this problem to develop a new weighting scheme for one-shot federated learning. First, we provide upper bounds on the local errors and biases, from which we derive an upper bound on the error of the aggregated federated learning parameter. Then, by formulating an optimization problem based on the bias-variance decomposition of the mean squared error (MSE), we develop a simple weighting scheme that depends only on the local sample sizes. The proposed procedure can be embedded in a wide variety of federated learning algorithms. Finally, we evaluate our procedure on large-scale estimation of linear models with ridge regression and compare it to the typical choice of weights in federated learning. We observe that, when sample sizes are unbalanced across the data sources, the proposed weighting scheme outperforms the standard one and converges faster to the performance of a centralized estimator.
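To make the one-shot setting concrete, the following is a minimal sketch of the pipeline the abstract describes: each client fits a local ridge estimator in closed form, and a single weighted average produces the federated parameter. The paper's optimized weights are derived from MSE bounds not reproduced here; this sketch only shows the aggregation interface, with the standard sample-size-proportional weights as the baseline. The function names (local_ridge, one_shot_aggregate), the regularization strength, and the simulated sample sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_ridge(X, y, lam=1.0):
    """Closed-form ridge estimate on one client's local data."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def one_shot_aggregate(local_thetas, weights):
    """Weighted average of local estimators: a single communication round."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to one
    return sum(w * th for w, th in zip(weights, local_thetas))

# Simulate K clients with deliberately unbalanced sample sizes.
rng = np.random.default_rng(0)
d, K = 10, 5
theta_star = rng.normal(size=d)
sizes = [20, 50, 100, 500, 2000]  # hypothetical local sample sizes n_k
clients = []
for n in sizes:
    X = rng.normal(size=(n, d))
    y = X @ theta_star + 0.5 * rng.normal(size=n)
    clients.append((X, y))

local_thetas = [local_ridge(X, y) for X, y in clients]

# Typical federated choice: weights proportional to n_k.
theta_std = one_shot_aggregate(local_thetas, sizes)
# Uniform weights, shown for contrast; the paper's scheme would
# replace this weight vector with the one minimizing its MSE bound.
theta_unif = one_shot_aggregate(local_thetas, [1.0] * K)

print("error (n_k-proportional):", np.linalg.norm(theta_std - theta_star))
print("error (uniform):         ", np.linalg.norm(theta_unif - theta_star))
```

Because aggregation touches only the weight vector, any weighting scheme, including the one proposed in the paper, can be dropped into one_shot_aggregate without changing the rest of the procedure.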