mmlearn.modules.encoders.clip.HFCLIPTextEncoderWithProjection

class HFCLIPTextEncoderWithProjection(model_name_or_path, pretrained=True, use_all_token_embeddings=False, freeze_layers=False, freeze_layer_norm=True, peft_config=None, model_config_kwargs=None)[source]

Bases: Module

Wrapper around the CLIPTextModelWithProjection from HuggingFace.

Parameters:
  • model_name_or_path (str) – The huggingface model name or a local path from which to load the model.

  • pretrained (bool, default=True) – Whether to load the pretrained weights or not.

  • use_all_token_embeddings (bool, default=False) – Whether to use all token embeddings for the text. If False, only the first token embedding will be used.

  • freeze_layers (Union[int, float, list[int], bool], default=False) – Whether to freeze layers of the model and which layers to freeze. If True, all model layers are frozen. If it is an integer, the first N layers of the model are frozen. If it is a float, the first N percent of the layers are frozen. If it is a list of integers, the layers at the indices in the list are frozen.

  • freeze_layer_norm (bool, default=True) – Whether to freeze the layer normalization layers of the model.

  • peft_config (Optional[PeftConfig], optional, default=None) – The configuration from the peft library to use to wrap the model for parameter-efficient finetuning.

  • model_config_kwargs (Optional[dict[str, Any]], optional, default=None) – Additional keyword arguments to pass to the model configuration.

Warns:

UserWarning – If both peft_config and freeze_layers are set; the peft_config will override the freeze_layers setting.
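
The following is a minimal construction sketch, not an example from the library's own documentation. The import path comes from the section title; the openai/clip-vit-base-patch32 checkpoint and the freeze_layers value are illustrative choices, not defaults.

    from mmlearn.modules.encoders.clip import HFCLIPTextEncoderWithProjection

    # Load the pretrained CLIP text tower with its projection head.
    # freeze_layers=0.5 freezes the first 50% of the transformer layers;
    # freeze_layer_norm=True keeps the layer-norm parameters frozen as well.
    text_encoder = HFCLIPTextEncoderWithProjection(
        "openai/clip-vit-base-patch32",
        pretrained=True,
        freeze_layers=0.5,
        freeze_layer_norm=True,
    )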

Methods


forward(inputs)[source]

Run the forward pass.

Parameters:

inputs (dict[str, Any]) – The input data. The input_ids are expected under the Modalities.TEXT key.

Returns:

The text embeddings, returned as a tuple with a single element.

Return type:

tuple[torch.Tensor]
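
As a hedged illustration of the forward pass, the sketch below tokenizes a small batch with the matching HuggingFace tokenizer and passes the token ids under the Modalities.TEXT key. The import path for Modalities (mmlearn.datasets.core) and the exact batch layout (for example, whether an attention mask is also expected under a separate key) are assumptions that may differ between mmlearn versions.

    import torch
    from transformers import CLIPTokenizer

    from mmlearn.datasets.core import Modalities  # assumed import path
    from mmlearn.modules.encoders.clip import HFCLIPTextEncoderWithProjection

    text_encoder = HFCLIPTextEncoderWithProjection("openai/clip-vit-base-patch32")

    # Tokenize a batch of captions with the matching HuggingFace tokenizer.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    batch = tokenizer(
        ["a photo of a cat", "a photo of a dog"],
        padding=True,
        return_tensors="pt",
    )

    # The encoder looks up the input_ids under the Modalities.TEXT key.
    inputs = {Modalities.TEXT: batch["input_ids"]}

    with torch.no_grad():
        (text_embeddings,) = text_encoder(inputs)  # tuple with a single element

    print(text_embeddings.shape)  # (batch_size, projection_dim)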