mmlearn.datasets.processors.masking.IJEPAMaskGenerator

class IJEPAMaskGenerator(input_size=(224, 224), patch_size=16, min_keep=10, allow_overlap=False, enc_mask_scale=(0.85, 1.0), pred_mask_scale=(0.15, 0.2), aspect_ratio=(0.75, 1.0), nenc=1, npred=4)[source]

Bases: object

Generate encoder and predictor masks for I-JEPA-style preprocessing.

This class generates masks dynamically for batches of examples; a usage sketch follows the parameter list below.

Parameters:
  • input_size (tuple[int, int], default=(224, 224)) – Input image size.

  • patch_size (int, default=16) – Size of each patch.

  • min_keep (int, default=10) – Minimum number of patches to keep.

  • allow_overlap (bool, default=False) – Whether to allow overlap between encoder and predictor masks.

  • enc_mask_scale (tuple[float, float], default=(0.85, 1.0)) – Scale range for encoder mask.

  • pred_mask_scale (tuple[float, float], default=(0.15, 0.2)) – Scale range for predictor mask.

  • aspect_ratio (tuple[float, float], default=(0.75, 1.0)) – Aspect ratio range for mask blocks.

  • nenc (int, default=1) – Number of encoder masks to generate.

  • npred (int, default=4) – Number of predictor masks to generate.
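
A minimal construction sketch, assuming mmlearn is installed: the keyword arguments below are simply the documented defaults, so any of them may be omitted.

    from mmlearn.datasets.processors.masking import IJEPAMaskGenerator

    # All keyword arguments shown are the documented defaults.
    # With a 224x224 input and 16-pixel patches, the image is divided into
    # a 14x14 grid of patches (196 patches in total).
    mask_generator = IJEPAMaskGenerator(
        input_size=(224, 224),
        patch_size=16,
        min_keep=10,
        allow_overlap=False,
        enc_mask_scale=(0.85, 1.0),
        pred_mask_scale=(0.15, 0.2),
        aspect_ratio=(0.75, 1.0),
        nenc=1,
        npred=4,
    )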

Methods

__call__(batch_size=1)[source]

Generate encoder and predictor masks for a batch of examples.

Parameters:
  • batch_size (int, default=1) – The batch size for which to generate masks.

Returns:
  A dictionary of encoder masks and predictor masks.

Return type:
  dict[str, Any]
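
A short sketch of the call interface: the generator is invoked with a batch size and returns a dict[str, Any]. The key names and tensor layout of the returned masks are not documented above, so the example only inspects whatever entries come back.

    # Generate encoder and predictor masks for a batch of 8 examples.
    masks = mask_generator(batch_size=8)

    # The return value is a dict[str, Any] holding the encoder and predictor
    # masks; the exact key names are implementation details not documented
    # here, so simply inspect what was returned.
    for name, value in masks.items():
        print(name, type(value))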

Attributes

allow_overlap: bool = False
aspect_ratio: tuple[float, float] = (0.75, 1.0)
enc_mask_scale: tuple[float, float] = (0.85, 1.0)
input_size: tuple[int, int] = (224, 224)
min_keep: int = 10
nenc: int = 1
npred: int = 4
patch_size: int = 16
pred_mask_scale: tuple[float, float] = (0.15, 0.2)