atomgen.models.tokengt module#

Implementation of the TokenGT model.

class InitialStructure2RelaxedEnergy(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an energy head on top for relaxed energy prediction.

forward(input_ids, coords, labels_energy=None, fixed=None, attention_mask=None)[source]#

Forward function call for the initial structure to relaxed energy model.

Return type:

Tuple[Optional[Tensor], Tensor]
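
Example (a minimal usage sketch assuming standard Hugging Face calling conventions; the tensor shapes and the (loss, energy) reading of the returned tuple are assumptions, not guaranteed by the signature alone):

    import torch
    from atomgen.models.tokengt import TransformerConfig, InitialStructure2RelaxedEnergy

    model = InitialStructure2RelaxedEnergy(TransformerConfig())

    input_ids = torch.randint(1, 119, (2, 8))   # assumed: atomic-species tokens, (batch, num_atoms)
    coords = torch.randn(2, 8, 3)               # assumed: initial Cartesian coordinates
    labels_energy = torch.randn(2)              # assumed: one relaxed-energy target per structure

    loss, energy = model(input_ids, coords, labels_energy=labels_energy)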

class InitialStructure2RelaxedStructure(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with a coordinate head on top for relaxed structure prediction.

forward(input_ids, coords, labels_coords=None, fixed=None, attention_mask=None)[source]#

Forward function call for the initial structure to relaxed structure model.

Return type:

Tuple[Optional[Tensor], Tensor]
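
Example (a minimal sketch under the same shape assumptions as above; `fixed` is assumed to mark atoms whose positions are held constant during relaxation):

    import torch
    from atomgen.models.tokengt import TransformerConfig, InitialStructure2RelaxedStructure

    model = InitialStructure2RelaxedStructure(TransformerConfig())

    input_ids = torch.randint(1, 119, (2, 8))
    coords = torch.randn(2, 8, 3)                 # initial positions
    labels_coords = torch.randn(2, 8, 3)          # assumed: relaxed target positions
    fixed = torch.zeros(2, 8, dtype=torch.bool)   # assumed: no atoms held fixed

    loss, pred_coords = model(input_ids, coords, labels_coords=labels_coords, fixed=fixed)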

class InitialStructure2RelaxedStructureAndEnergy(config)[source]#

Bases: TransformerPreTrainedModel

Initial structure to relaxed structure and energy prediction model.

Transformer with coordinate and energy heads on top for relaxed structure and energy prediction.

forward(input_ids, coords, labels_coords=None, labels_energy=None, fixed=None, attention_mask=None)[source]#

Forward function call for the initial structure to relaxed structure and energy model.

Return type:

Tuple[Tensor, Tuple[Tensor, Tensor]]
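
Example (a minimal sketch; per the annotated return type the second element is itself a tuple, read here as (predicted coordinates, predicted energy), which is an assumption about the ordering):

    import torch
    from atomgen.models.tokengt import (
        TransformerConfig,
        InitialStructure2RelaxedStructureAndEnergy,
    )

    model = InitialStructure2RelaxedStructureAndEnergy(TransformerConfig())

    input_ids = torch.randint(1, 119, (2, 8))
    coords = torch.randn(2, 8, 3)
    labels_coords = torch.randn(2, 8, 3)   # assumed: relaxed target positions
    labels_energy = torch.randn(2)         # assumed: relaxed-energy targets

    loss, (pred_coords, pred_energy) = model(
        input_ids, coords, labels_coords=labels_coords, labels_energy=labels_energy
    )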

class ParallelBlock(dim, num_heads, mlp_ratio=4, dropout=0.0)[source]#

Bases: Module

Parallel transformer block.

forward(x, attention_mask=None)[source]#

Forward function call for the parallel transformer block.

Return type:

Tensor
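
In a parallel block the attention and MLP branches read the same input and their outputs are combined, rather than running sequentially; that reading of "parallel" is inferred from the name, not stated in the docstring. Example (a minimal sketch, assuming the input is (batch, tokens, dim)):

    import torch
    from atomgen.models.tokengt import ParallelBlock

    block = ParallelBlock(dim=64, num_heads=4)   # mlp_ratio=4, dropout=0.0 by default
    x = torch.randn(2, 10, 64)                   # assumed: (batch, tokens, dim)
    out = block(x)                               # output shape matches the input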

class Structure2EnergyAndForces(config)[source]#

Bases: TransformerPreTrainedModel

Structure to energy and forces prediction model.

Transformer with energy and forces heads on top for energy and forces prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None, node_pe=None, edge_pe=None)[source]#

Forward function call for the structure to energy and forces model.

Return type:

Tuple[Tensor, Tuple[Tensor, Tensor, Optional[Tensor]]]
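
Example (a minimal sketch; the nesting of the returned predictions follows the annotated return type, but which slot holds energy, forces, or formation energy is an assumption):

    import torch
    from atomgen.models.tokengt import TransformerConfig, Structure2EnergyAndForces

    model = Structure2EnergyAndForces(TransformerConfig())

    input_ids = torch.randint(1, 119, (2, 8))
    coords = torch.randn(2, 8, 3)
    forces = torch.randn(2, 8, 3)     # assumed: per-atom force targets
    total_energy = torch.randn(2)     # assumed: per-structure energy targets

    loss, preds = model(input_ids, coords, forces=forces, total_energy=total_energy)
    # preds is a (Tensor, Tensor, Optional[Tensor]) tuple per the return type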

class TransformerConfig(vocab_size=123, dim=768, num_heads=12, depth=12, mlp_ratio=4, k=16, sigma=0.03, type_id_dim=64, dropout=0.0, mask_token_id=0, pad_token_id=119, bos_token_id=120, eos_token_id=121, cls_token_id=122, gradient_checkpointing=False, **kwargs)[source]#

Bases: PretrainedConfig

Configuration class to store the configuration of a TokenGT model.
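
Because TransformerConfig subclasses PretrainedConfig, the standard save/load round trip applies. Example (the ./tokengt-small path is illustrative):

    from atomgen.models.tokengt import TransformerConfig

    config = TransformerConfig()                               # library defaults (12 layers, dim 768)
    small = TransformerConfig(dim=256, num_heads=4, depth=4)   # override size-related fields

    small.save_pretrained("./tokengt-small")                   # inherited from PretrainedConfig
    reloaded = TransformerConfig.from_pretrained("./tokengt-small")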

class TransformerEncoder(config)[source]#

Bases: Module

Transformer encoder for atom modeling.

forward(input_ids, coords, node_pe, edge_pe, attention_mask=None)[source]#

Forward function call for the transformer encoder.

Return type:

Tensor

class TransformerForCoordinateAM(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an atom coordinate head on top for coordinate denoising.

forward(input_ids, coords, labels_coords=None, fixed=None, attention_mask=None)[source]#

Forward function call for the coordinate atom modeling model.

Return type:

Tuple[Optional[Tensor], Tensor]
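
Example (a minimal denoising sketch: perturb clean coordinates and train the model to recover them; treating config.sigma as the noise scale is an assumption):

    import torch
    from atomgen.models.tokengt import TransformerConfig, TransformerForCoordinateAM

    config = TransformerConfig()
    model = TransformerForCoordinateAM(config)

    input_ids = torch.randint(1, 119, (2, 8))
    clean = torch.randn(2, 8, 3)
    noisy = clean + config.sigma * torch.randn_like(clean)   # assumed: sigma is the noise scale

    loss, denoised = model(input_ids, noisy, labels_coords=clean)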

class TransformerForMaskedAM(config)[source]#

Bases: TransformerPreTrainedModel

Transformer with an atom modeling head on top for masked atom modeling.

forward(input_ids, coords, labels=None, fixed=None, attention_mask=None)[source]#

Forward function call for the masked atom modeling model.

Return type:

Tuple[Optional[Tensor], Tensor]
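
Masked atom modeling mirrors masked language modeling: species tokens are replaced with mask_token_id and the model predicts the originals. Example (a minimal sketch; the -100 ignore-index convention for unmasked positions is an assumption carried over from standard Hugging Face models):

    import torch
    from atomgen.models.tokengt import TransformerConfig, TransformerForMaskedAM

    config = TransformerConfig()
    model = TransformerForMaskedAM(config)

    input_ids = torch.randint(1, 119, (2, 8))
    coords = torch.randn(2, 8, 3)

    labels = torch.full_like(input_ids, -100)   # assumed: -100 positions ignored by the loss
    labels[:, 0] = input_ids[:, 0]              # supervise only the masked position
    input_ids[:, 0] = config.mask_token_id      # mask the first atom of each structure

    loss, logits = model(input_ids, coords, labels=labels)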

class TransformerModel(config)[source]#

Bases: TransformerPreTrainedModel

Transformer model for atom modeling.

forward(input_ids, coords, attention_mask=None)[source]#

Forward function call for the transformer model.

Return type:

Tensor
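
Example (a minimal sketch of the bare backbone, which returns per-token hidden states for downstream heads; the output size config.dim is an assumption):

    import torch
    from atomgen.models.tokengt import TransformerConfig, TransformerModel

    config = TransformerConfig()
    model = TransformerModel(config)

    input_ids = torch.randint(1, 119, (2, 8))
    coords = torch.randn(2, 8, 3)

    hidden = model(input_ids, coords)   # assumed: hidden states of size config.dim per token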

class TransformerPreTrainedModel(config, *inputs, **kwargs)[source]#

Bases: PreTrainedModel

Base class for all transformer models.

base_model_prefix = 'model'#
config_class#

alias of TransformerConfig

supports_gradient_checkpointing = True#
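
These attributes wire the subclasses into the Hugging Face PreTrainedModel machinery, so every model above inherits serialization and gradient checkpointing. Example (a minimal sketch; the ./ckpt path is illustrative):

    from atomgen.models.tokengt import TransformerConfig, TransformerForMaskedAM

    model = TransformerForMaskedAM(TransformerConfig())
    model.gradient_checkpointing_enable()    # allowed, per supports_gradient_checkpointing

    model.save_pretrained("./ckpt")          # inherited from PreTrainedModel
    reloaded = TransformerForMaskedAM.from_pretrained("./ckpt")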