atomgen.models.unimolplus module#

Implementation of the Uni-Mol+ model, with alterations to the original architecture.

class GaussianLayer(k=128, edge_types=1024)[source]#

Bases: Module

Gaussian pairwise positional embedding layer.

forward(x, edge_types)[source]#

Forward pass to compute the Gaussian positional embeddings.

Return type:

Tensor
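
A minimal sketch of calling this layer, assuming x holds pairwise distances of shape (batch, atoms, atoms) and edge_types holds integer pair-type indices; the shapes and value ranges are illustrative, not taken from the source:

   import torch

   from atomgen.models.unimolplus import GaussianLayer

   layer = GaussianLayer(k=128, edge_types=1024)
   dist = torch.rand(2, 8, 8)                      # assumed pairwise distance matrix
   pair_types = torch.randint(0, 1024, (2, 8, 8))  # assumed per-pair edge-type indices
   emb = layer(dist, pair_types)                   # Gaussian embeddings (last dim presumably k)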

class InitialStructure2RelaxedEnergy(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an energy head on top for relaxed energy prediction.

forward(input_ids, coords, labels_energy=None, fixed=None, attention_mask=None)[source]#

Forward function call for the relaxed energy prediction model.

Return type:

Tuple[Optional[Tensor], Tensor]
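
A hedged usage sketch, assuming input_ids are atom-type token ids, coords are the initial (unrelaxed) Cartesian coordinates of shape (batch, atoms, 3), and labels_energy is one relaxed-energy target per structure; with labels supplied, the first element of the returned tuple is presumably a loss:

   import torch

   from atomgen.models.unimolplus import InitialStructure2RelaxedEnergy, TransformerConfig

   model = InitialStructure2RelaxedEnergy(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))  # assumed atom-type ids (0 and 119-122 are special tokens)
   coords = torch.rand(2, 8, 3)               # assumed initial coordinates
   labels_energy = torch.rand(2)              # assumed relaxed-energy targets
   loss, energy_pred = model(input_ids, coords, labels_energy=labels_energy)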

class InitialStructure2RelaxedStructure(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with a coordinate head on top for relaxed structure prediction.

forward(input_ids, coords, labels_coords=None, fixed=None, attention_mask=None)[source]#

Forward function call for the initial-structure-to-relaxed-structure model.

Return type:

Tuple[Optional[Tensor], Tensor]
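
A similar sketch for the coordinate head, with the added assumptions that labels_coords are the relaxed coordinates and fixed is a boolean mask of atoms held fixed during relaxation:

   import torch

   from atomgen.models.unimolplus import InitialStructure2RelaxedStructure, TransformerConfig

   model = InitialStructure2RelaxedStructure(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   coords = torch.rand(2, 8, 3)                 # assumed initial coordinates
   labels_coords = torch.rand(2, 8, 3)          # assumed relaxed coordinates
   fixed = torch.zeros(2, 8, dtype=torch.bool)  # assumed mask of constrained atoms
   loss, coords_pred = model(input_ids, coords, labels_coords=labels_coords, fixed=fixed)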

class InitialStructure2RelaxedStructureAndEnergy(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with a coordinate head and an energy head on top for relaxed structure and energy prediction.

forward(input_ids, coords, labels_coords=None, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the relaxed structure and energy model.

Return type:

Tuple[Tensor, Tuple[Tensor, Tensor]]
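
A sketch combining both targets, assuming labels_coords are relaxed coordinates and total_energy is one scalar per structure; the exact contents of the inner output tuple are not spelled out here, so it is left unpacked:

   import torch

   from atomgen.models.unimolplus import (
       InitialStructure2RelaxedStructureAndEnergy,
       TransformerConfig,
   )

   model = InitialStructure2RelaxedStructureAndEnergy(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   coords = torch.rand(2, 8, 3)
   labels_coords = torch.rand(2, 8, 3)  # assumed relaxed coordinates
   total_energy = torch.rand(2)         # assumed per-structure energy targets
   loss, outputs = model(input_ids, coords, labels_coords=labels_coords, total_energy=total_energy)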

class ParallelBlock(dim, num_heads, mlp_ratio=4, dropout=0.0, k=128, op_hidden_dim=16, tr_hidden_dim=16, gradient_checkpointing=False)[source]#

Bases: Module

Parallel transformer block (MLP & Attention in parallel).

Based on:

‘Scaling Vision Transformers to 22 Billion Parameters’ - https://arxiv.org/abs/2302.05442

Adapted from the TIMM implementation.

forward(x, pos_embed, attention_mask=None)[source]#

Forward pass for the parallel block.

Return type:

Tuple[Tensor, Tensor]
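
An illustrative sketch of the block on its own, assuming x holds per-atom representations of shape (batch, atoms, dim) and pos_embed holds pairwise positional embeddings such as those produced by GaussianLayer; the pairwise shape is an assumption:

   import torch

   from atomgen.models.unimolplus import ParallelBlock

   block = ParallelBlock(dim=768, num_heads=32)
   x = torch.rand(2, 8, 768)             # assumed per-atom representations
   pos_embed = torch.rand(2, 8, 8, 128)  # assumed pairwise positional embeddings
   x, pos_embed = block(x, pos_embed)    # the block returns both updated tensors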

class Structure2Energy(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an energy head on top for energy prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to energy model.

Return type:

Tuple[Optional[Tensor], Tuple[Tensor, Optional[Tensor]]]
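
An inference-style sketch; without energy labels the first element of the returned tuple should be None, per the Optional[Tensor] return type. Shapes and token ids are assumptions:

   import torch

   from atomgen.models.unimolplus import Structure2Energy, TransformerConfig

   model = Structure2Energy(TransformerConfig())
   input_ids = torch.randint(1, 119, (1, 8))
   coords = torch.rand(1, 8, 3)
   loss, preds = model(input_ids, coords)  # loss is None when no labels are passed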

class Structure2EnergyAndForces(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an energy and forces head for energy and forces prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to energy and forces model.

Return type:

Tuple[Tensor, Tuple[Tensor, Tensor, Optional[Tensor]]]
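
A training-style sketch, assuming forces are per-atom labels of shape (batch, atoms, 3) and total_energy is one scalar per structure; the first returned element is presumably the combined loss:

   import torch

   from atomgen.models.unimolplus import Structure2EnergyAndForces, TransformerConfig

   model = Structure2EnergyAndForces(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   coords = torch.rand(2, 8, 3)
   forces = torch.rand(2, 8, 3)  # assumed per-atom force labels
   total_energy = torch.rand(2)  # assumed per-structure energy labels
   loss, outputs = model(input_ids, coords, forces=forces, total_energy=total_energy)
   loss.backward()               # usable directly in a standard PyTorch training step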

class Structure2Forces(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with a forces head on top for forces prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to forces model.

Return type:

Tuple[Tensor, Tuple[Tensor, Optional[Tensor]]]
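
A short sketch with force labels only, under the same shape assumptions as above:

   import torch

   from atomgen.models.unimolplus import Structure2Forces, TransformerConfig

   model = Structure2Forces(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   coords = torch.rand(2, 8, 3)
   forces = torch.rand(2, 8, 3)  # assumed per-atom force labels
   loss, preds = model(input_ids, coords, forces=forces)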

class TransformerConfig(vocab_size=123, dim=768, num_heads=32, depth=12, mlp_ratio=1, k=128, op_hidden_dim=16, tr_hidden_dim=16, dropout=0.0, mask_token_id=0, pad_token_id=119, bos_token_id=120, eos_token_id=121, cls_token_id=122, **kwargs)[source]#

Bases: PretrainedConfig

Configuration of a UniMolPlus model.

It is used to instantiate a UniMolPlus model according to the specified arguments.

model_type: str = 'transformer'#
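
For example, a smaller-than-default configuration can be built by overriding a few of the arguments shown above (the values here are illustrative):

   from atomgen.models.unimolplus import TransformerConfig

   config = TransformerConfig(dim=256, num_heads=8, depth=6)  # remaining fields keep their defaults
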
class TransformerEncoder(config)[source]#

Bases: Module

UniMolPlus Transformer encoder.

The transformer encoder consists of a series of parallel blocks, each containing a multi-head self-attention mechanism and a feed-forward network.

forward(input_ids, coords, attention_mask=None)[source]#

Forward pass for the transformer encoder.

Return type:

Tuple[Tensor, Tensor]
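
A sketch of running the encoder directly, assuming input_ids are atom-type tokens, coords are Cartesian coordinates, and attention_mask marks real (non-padding) atoms; the two returned tensors are presumably the per-atom and pairwise representations:

   import torch

   from atomgen.models.unimolplus import TransformerConfig, TransformerEncoder

   encoder = TransformerEncoder(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   coords = torch.rand(2, 8, 3)
   attention_mask = torch.ones(2, 8)  # assumed 1 = real atom, 0 = padding
   hidden, pair = encoder(input_ids, coords, attention_mask=attention_mask)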

class TransformerForCoordinateAM(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an atom coordinate head on top for coordinate denoising.

forward(input_ids, coords, labels_coords=None, fixed=None, attention_mask=None)[source]#

Forward function call for the coordinate atom modeling model.

Return type:

Tuple[Optional[Tensor], Tensor]
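
A denoising-style sketch, assuming coords are noised coordinates and labels_coords are the clean targets; the noise scale and shapes are illustrative:

   import torch

   from atomgen.models.unimolplus import TransformerConfig, TransformerForCoordinateAM

   model = TransformerForCoordinateAM(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   clean_coords = torch.rand(2, 8, 3)
   noisy_coords = clean_coords + 0.1 * torch.randn(2, 8, 3)  # assumed noising scheme
   loss, denoised = model(input_ids, noisy_coords, labels_coords=clean_coords)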

class TransformerForMaskedAM(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an atom modeling head on top for masked atom modeling.

forward(input_ids, coords, labels=None, fixed=None, attention_mask=None)[source]#

Forward function call for the masked atom modeling model.

Return type:

Tuple[Optional[Tensor], Tensor]
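
A masked-atom-modeling sketch, assuming masked positions are replaced with mask_token_id (0 by default) and labels carry the original token ids; how unmasked positions should be marked in labels is not documented here, so this is illustrative only:

   import torch

   from atomgen.models.unimolplus import TransformerConfig, TransformerForMaskedAM

   model = TransformerForMaskedAM(TransformerConfig())
   labels = torch.randint(1, 119, (2, 8))  # original atom-type ids
   input_ids = labels.clone()
   input_ids[:, :2] = 0                    # assumed masking with the default mask_token_id
   coords = torch.rand(2, 8, 3)
   loss, logits = model(input_ids, coords, labels=labels)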

class TransformerModel(config)[source]#

Bases: TransformerPreTrainedModel

Transformer model for atom modeling.

forward(input_ids, coords, attention_mask=None)[source]#

Forward function call for the transformer model.

Return type:

Tensor
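
A sketch of the bare backbone, which returns a single tensor of atom representations; shapes and token ids are assumptions as above:

   import torch

   from atomgen.models.unimolplus import TransformerConfig, TransformerModel

   model = TransformerModel(TransformerConfig())
   input_ids = torch.randint(1, 119, (2, 8))
   coords = torch.rand(2, 8, 3)
   hidden = model(input_ids, coords)  # per-atom hidden states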

class TransformerPreTrainedModel(config, *inputs, **kwargs)[source]#

Bases: PreTrainedModel

Base class for all transformer models.

base_model_prefix = 'model'#
config_class#

alias of TransformerConfig

supports_gradient_checkpointing = True#
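
Because the task models above inherit from PreTrainedModel through this base class, they should support the standard Hugging Face save/load round trip; the directory name below is hypothetical:

   from atomgen.models.unimolplus import Structure2EnergyAndForces, TransformerConfig

   model = Structure2EnergyAndForces(TransformerConfig())
   model.save_pretrained("unimolplus-checkpoint")  # hypothetical local directory
   model = Structure2EnergyAndForces.from_pretrained("unimolplus-checkpoint")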