fl4health.utils.data_generation module

class SyntheticFedProxDataset(num_clients, temperature=1.0, input_dim=60, output_dim=10, samples_per_client=1000)[source]

Bases: ABC

__init__(num_clients, temperature=1.0, input_dim=60, output_dim=10, samples_per_client=1000)[source]

Abstract base class to support synthetic dataset generation in the style of the original FedProx paper.

Paper link: https://arxiv.org/abs/1812.06127 Reference code: https://github.com/litian96/FedProx/tree/master/data/synthetic_1_1

NOTE: In the implementations here, all clients receive the same number of samples. In the original FedProx setup, they are sampled using a power law.

Parameters:
  • num_clients (int) – Number of datasets (one per client) to generate

  • temperature (float, optional) – temperature used for the softmax mapping to labels. Defaults to 1.0.

  • input_dim (int, optional) – dimension of the input features for the synthetic dataset. Default is as in the FedProx paper. Defaults to 60.

  • output_dim (int, optional) – dimension of the output labels for the synthetic dataset. These are one-hot encoded labels. Default is as in the FedProx paper. Defaults to 10.

  • samples_per_client (int, optional) – Number of samples to generate in each client’s dataset. Defaults to 1000.

construct_covariance_matrix()[source]

This function generates the covariance matrix used in generating input features. It is fixed across all datasets. It is a diagonal matrix with diagonal entries Sigma_{j, j} = j^{-1.2}, where j starts at 1 in this notation. The matrix has dimension input_dim x input_dim.

Returns:

Covariance matrix for generation of input features.

Return type:

torch.Tensor
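
A minimal sketch of how such a matrix could be constructed, written to match the description above (illustrative only, not necessarily the library's exact implementation):

    import torch

    def build_covariance(input_dim: int) -> torch.Tensor:
        # Diagonal entries Sigma_{j, j} = j^{-1.2}, with j starting at 1.
        j = torch.arange(1, input_dim + 1, dtype=torch.float32)
        return torch.diag(torch.pow(j, -1.2))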

generate()[source]

Based on the class parameters, generate a list of synthetic TensorDatasets, one for each client.

Returns:

Synthetic datasets for each client.

Return type:

list[TensorDataset]
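
Typical usage might look like the following, using the concrete IID subclass documented below (parameter values here are illustrative):

    from fl4health.utils.data_generation import SyntheticIidFedProxDataset

    generator = SyntheticIidFedProxDataset(num_clients=5, samples_per_client=1000)
    client_datasets = generator.generate()  # list[TensorDataset], one per client
    x, y = client_datasets[0][0]  # first sample of the first client's dataset
    assert x.shape == (60,) and y.shape == (10,)  # default input_dim and output_dim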

abstract generate_client_tensors()[source]

Abstract method determining how the tensors are generated in each subclass. Every subclass uses the affine mapping, but the parameters defining that mapping are set up differently and determined in this function.

Returns:

Input and output tensors for each of the clients.

Return type:

list[tuple[torch.Tensor, torch.Tensor]]

map_inputs_to_outputs(x, W, b)[source]

This function maps features x to a label y as done in the original paper. The first stage is the temperature-scaled affine transformation hat{y} = (1/T)(Wx + b). These logits are mapped to a distribution over labels with y = softmax(hat{y}). A label is then sampled from this distribution and one-hot encoded.

NOTE: This procedure differs slightly from that of the original paper, which simply one-hot encoded the argmax of the softmax distribution. Sampling instead allows for a bit more label stochasticity.

Parameters:
  • x (torch.Tensor) – The input features to be mapped to output labels. Shape is (dataset size, input_dim)

  • W (torch.Tensor) – The linear transformation matrix. Shape is (output_dim, input_dim)

  • b (torch.Tensor) – The bias in the linear transformation. Shape is (output_dim, 1)

Returns:

The labels associated with each of the inputs. The shape is (dataset size, output_dim)

Return type:

torch.Tensor
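
A hedged sketch of this mapping, following the shapes documented above (illustrative, not the library's verbatim code):

    import torch

    def map_inputs_to_outputs(x: torch.Tensor, W: torch.Tensor, b: torch.Tensor,
                              temperature: float = 1.0) -> torch.Tensor:
        # Temperature-scaled affine transformation: hat{y} = (1/T)(Wx + b).
        logits = (torch.matmul(x, W.T) + b.squeeze(-1)) / temperature
        # Softmax converts the logits to a distribution over the labels.
        probabilities = torch.softmax(logits, dim=1)
        # Sample one label per row from that distribution, then one-hot encode.
        samples = torch.multinomial(probabilities, num_samples=1).squeeze(1)
        return torch.nn.functional.one_hot(samples, num_classes=W.shape[0]).float()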

class SyntheticIidFedProxDataset(num_clients, temperature=1.0, input_dim=60, output_dim=10, samples_per_client=1000)[source]

Bases: SyntheticFedProxDataset

__init__(num_clients, temperature=1.0, input_dim=60, output_dim=10, samples_per_client=1000)[source]

IID synthetic dataset generator modeled after the implementation in the original FedProx paper. See Appendix C.1 of the paper linked below for additional details. The IID generation code is based strictly on the description of IID dataset generation in that appendix.

Paper link: https://arxiv.org/abs/1812.06127

NOTE: This generator ends up with fairly skewed labels in generation. That is, many of the clients will not have representations of all the labels. This has been verified as also occurring in the original FedProx reference code and is not a bug.

Parameters:
  • num_clients (int) – Number of datasets (one per client) to generate

  • temperature (float, optional) – temperature used for the softmax mapping to labels. Defaults to 1.0.

  • input_dim (int, optional) – dimension of the input features for the synthetic dataset. Default is as in the FedProx paper. Defaults to 60.

  • output_dim (int, optional) – dimension of the output labels for the synthetic dataset. These are one-hot encoded labels. Default is as in the FedProx paper. Defaults to 10.

  • samples_per_client (int, optional) – Number of samples to generate in each client’s dataset. Defaults to 1000.

generate_client_tensors()[source]

For IID generation, this function is simple: no parameters need to be sampled per client, as they are all shared across clients.

Returns:

Set of input and output tensors for each client.

Return type:

list[tuple[torch.Tensor, torch.Tensor]]

get_input_output_tensors()[source]

As described in the original FedProx paper (Appendix C.1), the features are all sampled from a centered multidimensional normal distribution with a diagonal covariance matrix shared across clients.

Returns:

X and Y for the client's synthetic dataset. The shape of X is n_samples x input_dim. The shape of Y is n_samples x output_dim and is one-hot encoded.

Return type:

tuple[torch.Tensor, torch.Tensor]
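
A rough sketch of the IID feature sampling described above, assuming the shared diagonal covariance matrix from construct_covariance_matrix (illustrative names and structure):

    import torch

    def iid_inputs(n_samples: int, input_dim: int, covariance: torch.Tensor) -> torch.Tensor:
        # Centered multivariate normal with the shared diagonal covariance matrix.
        mean = torch.zeros(input_dim)
        distribution = torch.distributions.MultivariateNormal(mean, covariance)
        return distribution.sample(torch.Size([n_samples]))  # shape: (n_samples, input_dim)

The corresponding labels would then be produced by applying the shared affine transformation (W, b) through map_inputs_to_outputs.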

class SyntheticNonIidFedProxDataset(num_clients, alpha, beta, temperature=1.0, input_dim=60, output_dim=10, samples_per_client=1000)[source]

Bases: SyntheticFedProxDataset

__init__(num_clients, alpha, beta, temperature=1.0, input_dim=60, output_dim=10, samples_per_client=1000)[source]

Non-IID synthetic dataset generator modeled after the implementation in the original FedProx paper. See Section 5.1 of the paper linked below for additional details. The non-IID generation code is also modeled after the code housed in the GitHub repository linked below.

Paper link: https://arxiv.org/abs/1812.06127 Reference code: https://github.com/litian96/FedProx/tree/master/data/synthetic_1_1

NOTE: This generator ends up with fairly skewed labels in generation. That is, many of the clients will not have representations of all the labels. This has been verified as also occurring in the reference code above and is not a bug.

The larger alpha and beta are, the more heterogeneous the clients' data is. The larger alpha is, the more “different” the affine transformations are from one another. The larger beta is, the larger the variance in the centers of the input features. A usage sketch follows the parameter list below.

Parameters:
  • num_clients (int) – Number of datasets (one per client) to generate

  • alpha (float) – The standard deviation for the mean (u_k), drawn from a centered normal distribution, which is then used to generate the elements of the affine transformation components W and b.

  • beta (float) – The standard deviation for each element of the multidimensional mean (v_k), drawn from a centered normal distribution, which is used as the center of the input features x ~ N(v_k, Sigma).

  • temperature (float, optional) – temperature used for the softmax mapping to labels. Defaults to 1.0.

  • input_dim (int, optional) – dimension of the input features for the synthetic dataset. Default is as in the FedProx paper. Defaults to 60.

  • output_dim (int, optional) – dimension of the output labels for the synthetic dataset. These are one-hot encoded labels. Default is as in the FedProx paper. Defaults to 10.

  • samples_per_client (int, optional) – Number of samples to generate in each client’s dataset. Defaults to 1000.
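
As a usage sketch, alpha = beta = 1.0 would mirror the synthetic_1_1 setting of the reference code (all values here are illustrative):

    from fl4health.utils.data_generation import SyntheticNonIidFedProxDataset

    generator = SyntheticNonIidFedProxDataset(num_clients=10, alpha=1.0, beta=1.0)
    client_datasets = generator.generate()  # heterogeneity grows with alpha and beta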

generate_client_tensors()[source]

For the non-IID synthetic generator, this function uses the values of alpha and beta to sample the parameters that will be used to generate the synthetic datasets on each client. For each client, beta is used to sample a mean value from which to generate the input features, while alpha is used to sample a mean for the transformation components W and b. Note that sampling occurs for EACH client independently. The larger alpha and beta are, the larger the variance in these values, implying a higher degree of heterogeneity across clients.

Returns:

Set of input and output tensors for each client.

Return type:

list[tuple[torch.Tensor, torch.Tensor]]
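
A minimal sketch of the per-client sampling just described, under the stated interpretation of alpha and beta (names are illustrative):

    import torch

    def sample_client_parameters(alpha: float, beta: float, input_dim: int) -> tuple[float, torch.Tensor]:
        # Each client independently draws a mean for its affine parameters (u_k)
        # and a mean vector for its input features (v_k); larger alpha and beta
        # spread these values further apart across clients.
        u = torch.normal(mean=0.0, std=alpha, size=(1,)).item()
        v = torch.normal(mean=0.0, std=beta, size=(input_dim,))
        return u, v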

get_input_output_tensors(mu, v, sigma)[source]

This function takes the center for the elements of the affine transformation (mu), the centers for each of the input feature dimensions (v), and the covariance of those features (sigma), and produces input and output tensor pairs with the appropriate dimensions.

Parameters:
  • mu (float) – The mean value from which each element of W and b is drawn ~ N(mu, 1)

  • v (torch.Tensor) – This is assumed to be a 1D tensor of size self.input_dim and represents the mean for the multivariate normal from which to draw the input x

  • sigma (torch.Tensor) – This is assumed to be a 2D tensor of shape (input_dim, input_dim) and represents the covariance matrix Sigma of the multivariate normal from which to draw the input x. It should be a diagonal matrix as well.

Returns:

X and Y for the client's synthetic dataset. The shape of X is n_samples x input_dim. The shape of Y is n_samples x output_dim and is one-hot encoded.

Return type:

tuple[torch.Tensor, torch.Tensor]
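
Putting the pieces together, this step might be sketched as follows, reusing the map_inputs_to_outputs sketch from earlier (assumed helper names, not the library's exact code):

    import torch

    def non_iid_tensors(mu: float, v: torch.Tensor, sigma: torch.Tensor,
                        n_samples: int, output_dim: int, temperature: float = 1.0):
        input_dim = v.shape[0]
        # Each element of the affine transformation is drawn from N(mu, 1).
        W = torch.normal(mean=mu, std=1.0, size=(output_dim, input_dim))
        b = torch.normal(mean=mu, std=1.0, size=(output_dim, 1))
        # Inputs are drawn from the multivariate normal N(v, sigma).
        x = torch.distributions.MultivariateNormal(v, sigma).sample(torch.Size([n_samples]))
        # Labels follow from the temperature-scaled softmax mapping sketched above.
        y = map_inputs_to_outputs(x, W, b, temperature)
        return x, y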