Horizontal Federated Learning

As outlined in The Different Flavors of Federated Learning, Horizontal FL considers the setting where clients $i = 1, \ldots, N$ each hold a training dataset, $D_i$, in their local compute environment. Each of these datasets shares the same feature and label spaces. The goal of Horizontal FL is to train a high-performing model (or models) using all of the training data, $\{D_i\}_{i=1}^{N}$, residing on the clients in the system.

Figure: Horizontal FL. Feature spaces are shared between clients, enabling access to more unique training data points.
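
To make the partitioning concrete, here is a minimal sketch in NumPy. The toy dataset and row-wise split are illustrative assumptions, not part of any FL framework: each client's local dataset $D_i$ has the same five features, but holds a disjoint set of training points.

```python
# Illustrative horizontal partition of a toy dataset: clients share the same
# feature and label spaces but hold disjoint rows (training points).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 5))      # 90 samples, 5 shared features
y = rng.integers(0, 2, size=90)   # binary labels in a shared label space

N = 3  # number of clients
client_datasets = [(X[i::N], y[i::N]) for i in range(N)]

for i, (X_i, y_i) in enumerate(client_datasets):
    # Every local dataset has the same feature dimension (5) but
    # contributes unique training points.
    print(f"client {i}: {X_i.shape[0]} samples, {X_i.shape[1]} features")
```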

In a Horizontal FL system, some fundamental elements are generally present. In most cases, communication and computation between the server and clients are broken into iterations known as server rounds. Typically, the number of such rounds is simply specified as a hyper-parameter, $T > 0$. During each round, the server chooses a subset of $m \leq N$ clients to participate in that round. Note that one may choose to include all clients or a proper subset thereof. These clients perform some form of training using their local datasets and send the results of that training back to the server. The contents of these "training results" vary depending on the method used, but often include the model parameters after local training.

After receiving the training results from the clients participating in the round, the server performs some form of aggregation, combining the individual training results. These combined results are returned to the clients for the next round of training. In most cases, the results are communicated to all clients, rather than just the subset that participated in the round.
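
As one concrete example of aggregation, the sketch below implements a FedAvg-style weighted average of client parameters, where each participating client is weighted by its local dataset size. The list-of-NumPy-arrays model representation and the function signature are assumptions made for illustration, not the API of any particular library.

```python
# A sketch of FedAvg-style aggregation: parameters returned by the m
# participating clients are averaged, weighted by local dataset size.
from typing import List
import numpy as np

def aggregate(client_params: List[List[np.ndarray]],
              num_samples: List[int]) -> List[np.ndarray]:
    total = sum(num_samples)
    weights = [n / total for n in num_samples]
    # Combine each parameter tensor across clients with a weighted sum.
    return [
        sum(w * params[layer] for w, params in zip(weights, client_params))
        for layer in range(len(client_params[0]))
    ]
```

Under this scheme, for example, two clients holding 10 and 30 local samples would contribute to the combined parameters with weights 0.25 and 0.75, respectively.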

This process skeleton is summarized in the algorithm below. The specifics of how each of the high-level steps outlined in the algorithm functions depend on the exact Horizontal FL algorithm being used. There are also variations of such algorithms that modify or add to the basic framework below.

Horizontal FL Algorithm Outline
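
A rough, runnable Python rendering of this outline is given below. Everything here is a hedged sketch: the toy parameter vector, the placeholder local update, and the unweighted mean (a weighted variant was sketched above) stand in for the method-specific choices each Horizontal FL algorithm makes.

```python
# A toy rendering of the Horizontal FL round structure outlined above. The
# local update and aggregation are placeholders for method-specific logic.
import random
import numpy as np

def local_train(params: np.ndarray, data) -> np.ndarray:
    # Placeholder for local training (e.g., several epochs of SGD on D_i);
    # here, a dummy step nudges the parameters toward the client's feature mean.
    X, _ = data
    return params + 0.1 * (X.mean(axis=0) - params)

rng = np.random.default_rng(0)
N = 10  # total number of clients
clients = [(rng.normal(size=(20, 5)), rng.integers(0, 2, 20)) for _ in range(N)]

T, m = 5, 4             # server rounds and clients sampled per round
params = np.zeros(5)    # the global model's parameters
for t in range(T):
    participants = random.sample(clients, m)   # select m <= N clients
    results = [local_train(params, d) for d in participants]
    params = np.mean(results, axis=0)          # aggregate training results
    # The combined parameters would now typically be sent to all N clients.
```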

This section of the book is organized as follows:

Each of the chapters covers a different aspect of Horizontal FL and provides deeper details on the inner workings of the various algorithms. In Vanilla FL, the foundational Horizontal FL algorithms are discussed. In Robust Global FL, extensions to these foundational algorithms are detailed. Such extensions aim to improve properties like convergence and robustness to the heterogeneous-data challenges common in FL applications, while still producing a single generalizable model. Finally, Personalized FL discusses robust and effective methods for training individual models per client that still benefit from the global perspective of the other clients. The end result is a set of models, each optimized to perform well on its client's unique data distribution.

