mmlearn.conf

Hydra/Hydra-zen-based configurations.

Module attributes

external_store

A custom ZenStore object that will immediately add entries to Hydra's global config store as soon as they are registered. Use this as a decorator for newly-defined configurable functions/classes outside the main mmlearn package. The following config groups come pre-registered:

  • modules/optimizers – Adadelta, Adagrad, Adam, Adamax, AdamW, ASGD, LBFGS, NAdam, RAdam, RMSprop, Rprop, SGD, SparseAdam

  • modules/lr_schedulers – StepLR, MultiStepLR, ExponentialLR, CosineAnnealingLR, CyclicLR, OneCycleLR, ReduceLROnPlateau, LinearLR, PolynomialLR, CosineAnnealingWarmRestarts

  • modules/losses – L1Loss, NLLLoss, NLLLoss2d, PoissonNLLLoss, GaussianNLLLoss, KLDivLoss, MSELoss, BCELoss, BCEWithLogitsLoss, HingeEmbeddingLoss, MultiLabelMarginLoss, SmoothL1Loss, HuberLoss, SoftMarginLoss, CrossEntropyLoss, MultiLabelSoftMarginLoss, CosineEmbeddingLoss, MarginRankingLoss, MultiMarginLoss, TripletMarginLoss, TripletMarginWithDistanceLoss, CTCLoss

  • dataloader/sampler – RandomSampler, SequentialSampler, DistributedSampler

  • trainer/callbacks – BatchSizeFinder, Checkpoint, DeviceStatsMonitor, EarlyStopping, BackboneFinetuning, BaseFinetuning, GradientAccumulationScheduler, LambdaCallback, LearningRateFinder, LearningRateMonitor, ModelSummary, OnExceptionCheckpoint, BasePredictionWriter, ProgressBar, RichProgressBar, TQDMProgressBar, ModelPruning, RichModelSummary, SpikeDetection, ThroughputMonitor, Timer, ModelCheckpoint
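For example, a class defined outside mmlearn can be registered via the decorator form, following the hydra-zen ZenStore API (a minimal sketch; MyEncoder and the modules/encoders group are hypothetical):

    import torch.nn as nn

    from mmlearn.conf import external_store


    # Register a user-defined class under a (hypothetical) config group.
    # The entry is added to Hydra's global config store immediately.
    @external_store(group="modules/encoders")
    class MyEncoder(nn.Module):
        """A toy encoder defined outside the main mmlearn package."""

        def __init__(self, hidden_dim: int = 512) -> None:
            super().__init__()
            self.proj = nn.Linear(hidden_dim, hidden_dim)

The class can then be selected from experiment configs, e.g. via a defaults-list entry or a modules/encoders=MyEncoder override.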

Functions

register_external_modules(module, group, name=None, package=None, provider=None, base_cls=None, ignore_cls=None, ignore_prefix=None, **kwargs_for_builds)[source]

Add all classes in an external module to a ZenStore.

Parameters:
  • module (ModuleType) – The module to add classes from.

  • group (str) – The config group to add the classes to.

  • name (Optional[str], optional, default=None) – The name to give to the dynamically-generated configs. If None, the class name is used.

  • package (Optional[str], optional, default=None) – The package to add the configs to.

  • provider (Optional[str], optional, default=None) – The provider to add the configs to.

  • base_cls (Optional[type], optional, default=None) – The base class to filter classes by. The base class is also excluded from the configs.

  • ignore_cls (Optional[list[type]], optional, default=None) – A list of classes to ignore when generating configs.

  • ignore_prefix (Optional[str], optional, default=None) – Ignore classes whose names start with this prefix.

  • kwargs_for_builds (Any) – Additional keyword arguments to pass to hydra_zen.builds.

Return type:

None
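For instance, all public metric classes from torchmetrics could be registered in one call (a sketch; the modules/metrics group name and provider value are assumptions, not groups used by mmlearn):

    import torchmetrics

    from mmlearn.conf import register_external_modules

    # Register every subclass of ``torchmetrics.Metric`` found in the
    # package under a (hypothetical) ``modules/metrics`` config group,
    # skipping private classes whose names start with an underscore.
    register_external_modules(
        torchmetrics,
        group="modules/metrics",
        provider="torchmetrics",
        base_cls=torchmetrics.Metric,
        ignore_prefix="_",
    )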

Classes

DataLoaderConf

Configuration for the dataloader.

DatasetConf

Configuration template for the datasets.

JobType

Type of the job.

MMLearnConf

Top-level configuration for mmlearn experiments.

class DataLoaderConf(train=<factory>, val=<factory>, test=<factory>)[source]

Configuration for the dataloader.

test: Any

Configuration for the test dataloader.

train: Any

Configuration for the training dataloader.

val: Any

Configuration for the validation dataloader.
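A minimal sketch of filling these fields with hydra-zen configs for torch.utils.data.DataLoader (the batch sizes are illustrative, and zen_partial=True assumes the dataset object is supplied later, at instantiation time):

    from hydra_zen import builds
    from torch.utils.data import DataLoader

    from mmlearn.conf import DataLoaderConf

    dataloader_conf = DataLoaderConf(
        # Partial configs: the ``dataset`` argument is left to be
        # provided when the dataloader is actually instantiated.
        train=builds(DataLoader, batch_size=32, shuffle=True, zen_partial=True),
        val=builds(DataLoader, batch_size=64, zen_partial=True),
        test=builds(DataLoader, batch_size=64, zen_partial=True),
    )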

class DatasetConf(train=None, val=None, test=None)[source]

Configuration template for the datasets.

test: Optional[Any] = None

Configuration for the test dataset.

train: Optional[Any] = None

Configuration for the training dataset.

val: Optional[Any] = None

Configuration for the validation dataset.
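For example, using a torchvision dataset (a sketch; the dataset choice and root path are purely illustrative):

    from hydra_zen import builds
    from torchvision.datasets import CIFAR10

    from mmlearn.conf import DatasetConf

    dataset_conf = DatasetConf(
        train=builds(CIFAR10, root="./data", train=True, download=True),
        val=builds(CIFAR10, root="./data", train=False),
        # ``test`` keeps its default of None, i.e. no test split.
    )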

class JobType(value)[source]

Type of the job.

eval = 'eval'
train = 'train'

class MMLearnConf(defaults=<factory>, experiment_name='???', job_type=JobType.train, seed=None, datasets=<factory>, dataloader=<factory>, task='???', trainer=<factory>, tags=<factory>, resume_from_checkpoint=None, strict_loading=True, torch_compile_kwargs=<factory>, hydra=<factory>)[source]

Top-level configuration for mmlearn experiments.

dataloader: DataLoaderConf

Configuration for the dataloaders.

datasets: DatasetConf

Configuration for the datasets.

defaults: list[Any]

The Hydra defaults list, used to compose the final configuration.

experiment_name: str = '???'

Name of the experiment. This must be specified for any experiment to run.

hydra: HydraConf

Hydra configuration.

job_type: JobType = JobType.train

Type of the job.

resume_from_checkpoint: Optional[Path] = None

Path to the checkpoint to resume training from.

seed: Optional[int] = None

Seed for the random number generators. This is set for Python, NumPy, and PyTorch, including the workers in PyTorch DataLoaders.

strict_loading: bool = True

Whether to strictly enforce the loading of model weights, i.e., setting strict=True in load_from_checkpoint().

tags: Optional[list[str]]

Tags for the experiment. These are useful for wandb logging.

task: Any = '???'

Configuration for the task. This is required to run any experiment.

torch_compile_kwargs: dict[str, Any]

Configuration for torch.compile. These are the same as the keyword arguments of torch.compile().

trainer: Any

Configuration for the trainer. The options here are the same as the arguments of the Lightning Trainer.
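A minimal sketch of building the top-level config directly in Python (in a real run the config is composed by Hydra, and the MISSING '???' fields experiment_name and task must be provided as overrides; the values below are illustrative):

    from omegaconf import OmegaConf

    from mmlearn.conf import JobType, MMLearnConf

    # Construct the structured config; ``task`` is left as MISSING here
    # and would have to be filled in before a run could start.
    conf = OmegaConf.structured(
        MMLearnConf(
            experiment_name="toy_experiment",
            job_type=JobType.train,
            seed=42,
        )
    )
    print(OmegaConf.to_yaml(conf.dataloader))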
