fl4health.utils.random module¶
- generate_hash(length=8)[source]¶
Generates a unique hash used as an ID for a client. NOTE: this generation is unaffected by the setting of random seeds.
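A minimal sketch of how such a seed-independent ID can be produced (illustrative only, not the library's actual implementation; the helper name generate_hash_sketch is an assumption). uuid4 draws from the operating system's entropy source, so seeding Python's random module has no effect on it:

```python
import random
import uuid

def generate_hash_sketch(length: int = 8) -> str:
    # uuid4 is backed by os.urandom, so it ignores random.seed entirely.
    return str(uuid.uuid4()).replace("-", "")[:length]

# Even with identical seeds, successive calls yield different IDs.
random.seed(42)
first = generate_hash_sketch()
random.seed(42)
second = generate_hash_sketch()
assert first != second  # collisions are astronomically unlikely
assert len(first) == 8
```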
- restore_random_state(random_state, numpy_state, torch_state)[source]¶
Restores the state of the random number generators for Python, NumPy, and PyTorch to a previously saved state.
- save_random_state()[source]¶
Saves the current state of the random number generators for Python, NumPy, and PyTorch so that they can be restored at a later time.
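The save/restore pattern above can be sketched for the Python and NumPy generators (the library's helpers additionally cover the PyTorch generator in the same fashion):

```python
import random
import numpy as np

# Save the current RNG states, analogous to save_random_state().
random_state = random.getstate()
numpy_state = np.random.get_state()

draw_before = (random.random(), np.random.rand())

# Restore the saved states, analogous to restore_random_state(...),
# then redraw: the same values are produced again.
random.setstate(random_state)
np.random.set_state(numpy_state)
draw_after = (random.random(), np.random.rand())

assert draw_before == draw_after
```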
- set_all_random_seeds(seed=42, use_deterministic_torch_algos=False, disable_torch_benchmarking=False)[source]¶
Set seeds for Python random, NumPy random, and PyTorch random. It also offers the option to force PyTorch to use deterministic algorithms for certain methods and layers; see https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html for more details. Finally, it allows one to disable CUDA benchmarking, which can also affect the determinism of PyTorch training outside of random seeding. For more information on reproducibility in PyTorch, see: https://pytorch.org/docs/stable/notes/randomness.html
NOTE: If the use_deterministic_torch_algos flag is set to True, you may need to set the environment variable CUBLAS_WORKSPACE_CONFIG to something like :4096:8, to avoid CUDA errors. Additional documentation may be found here: https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility
- Parameters:
seed (int | None, optional) – The seed value to be used for the random number generators. Defaults to 42. Seed setting is a no-op if seed is explicitly set to None.
use_deterministic_torch_algos (bool, optional) – Whether or not to set torch.use_deterministic_algorithms to True. Defaults to False.
disable_torch_benchmarking (bool, optional) – Whether to explicitly disable CUDA benchmarking in torch processes. Defaults to False.
- Return type:
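A hedged sketch of seed setting along these lines (the helper name set_seeds_sketch is an assumption; the real set_all_random_seeds also handles the deterministic-algorithm and benchmarking flags on the torch side). The torch import is guarded so the sketch still runs where PyTorch is not installed:

```python
import os
import random
import numpy as np

def set_seeds_sketch(seed=42, use_deterministic_torch_algos=False):
    if seed is None:
        return  # explicitly passing None makes seeding a no-op
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch
        torch.manual_seed(seed)
        if use_deterministic_torch_algos:
            # CUBLAS_WORKSPACE_CONFIG may be required to avoid CUDA errors
            # when deterministic algorithms are forced.
            os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
            torch.use_deterministic_algorithms(True)
    except ImportError:
        pass  # PyTorch not available in this environment

# Re-seeding with the same value reproduces the same draws.
set_seeds_sketch(42)
first = (random.random(), np.random.rand())
set_seeds_sketch(42)
second = (random.random(), np.random.rand())
assert first == second
```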