atomgen.models.modeling_atomformer module#

Implementation of the Atomformer model.

class AtomFormerForSystemClassification(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with a classification head for system classification.

forward(input_ids, coords, labels=None, attention_mask=None, token_type_ids=None)[source]#

Forward function call for the system classification model.

Return type:

Tuple[Optional[Tensor], Tensor]
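
Example — a minimal sketch of a classification call; the tensor shapes, the class indices, and the AtomformerConfig import path are assumptions rather than documented guarantees:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import AtomFormerForSystemClassification
>>> model = AtomFormerForSystemClassification(AtomformerConfig())  # assumes usable config defaults
>>> input_ids = torch.randint(1, 100, (2, 8))  # (batch, num_atoms) atom-type tokens, placeholder values
>>> coords = torch.randn(2, 8, 3)              # (batch, num_atoms, 3) Cartesian coordinates
>>> labels = torch.tensor([0, 1])              # one class index per system
>>> loss, logits = model(input_ids, coords, labels=labels)

When labels is omitted, the first element of the returned tuple is None.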

class AtomformerEncoder(config)[source]#

Bases: Module

Atomformer encoder.

The transformer encoder consists of a series of parallel blocks, each containing a multi-head self-attention mechanism and a feed-forward network.

forward(input_ids, coords, attention_mask=None, token_type_ids=None)[source]#

Forward pass for the transformer encoder.

Return type:

Tuple[Tensor, Tensor]
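
Example — a sketch of running the encoder directly; reading the second returned tensor as the pairwise positional embedding is inferred from the Tuple[Tensor, Tensor] return type, not documented:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import AtomformerEncoder
>>> encoder = AtomformerEncoder(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))  # (batch, num_atoms)
>>> coords = torch.randn(2, 8, 3)              # (batch, num_atoms, 3)
>>> hidden_states, pair_embed = encoder(input_ids, coords)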

class AtomformerForCoordinateAM(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with an atom coordinate head on top for coordinate denoising.

forward(input_ids, coords, labels_coords=None, fixed=None, attention_mask=None)[source]#

Forward function call for the coordinate atom modeling model.

Return type:

Tuple[Optional[Tensor], Tensor]
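
Example — a denoising sketch; treating coords as the noised input and labels_coords as the clean targets is an assumption based on the "coordinate denoising" description:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import AtomformerForCoordinateAM
>>> model = AtomformerForCoordinateAM(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> clean_coords = torch.randn(2, 8, 3)
>>> noisy_coords = clean_coords + 0.05 * torch.randn_like(clean_coords)  # add small Gaussian noise
>>> loss, pred_coords = model(input_ids, noisy_coords, labels_coords=clean_coords)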

class AtomformerForMaskedAM(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with an atom modeling head on top for masked atom modeling.

forward(input_ids, coords, labels=None, fixed=None, attention_mask=None)[source]#

Forward function call for the masked atom modeling model.

Return type:

Tuple[Optional[Tensor], Tensor]
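
Example — a masked atom modeling sketch; the mask token id used below is a placeholder (the real id comes from the tokenizer or config):

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import AtomformerForMaskedAM
>>> model = AtomformerForMaskedAM(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)
>>> labels = input_ids.clone()  # targets are the original atom tokens
>>> masked = input_ids.clone()
>>> masked[:, :2] = 0           # placeholder mask token id (assumption)
>>> loss, logits = model(masked, coords, labels=labels)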

class AtomformerModel(config)[source]#

Bases: AtomformerPreTrainedModel

Bare Atomformer model that outputs encoder hidden states, without a task-specific head.

forward(input_ids, coords, attention_mask=None)[source]#

Forward function call for the transformer model.

Return type:

Tensor
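
Example — a sketch of extracting per-atom embeddings; the (batch, num_atoms, hidden_dim) output layout is an assumption from the encoder design:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import AtomformerModel
>>> model = AtomformerModel(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)
>>> hidden_states = model(input_ids, coords)  # assumed shape: (batch, num_atoms, hidden_dim)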

class AtomformerPreTrainedModel(config, *inputs, **kwargs)[source]#

Bases: PreTrainedModel

Base class for all Atomformer models.

base_model_prefix = 'model'#
config_class#

alias of AtomformerConfig

supports_gradient_checkpointing = True#
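
Because the models inherit from Hugging Face's PreTrainedModel, standard checkpoint loading applies; the checkpoint path below is hypothetical:

>>> from atomgen.models.modeling_atomformer import AtomformerModel
>>> model = AtomformerModel.from_pretrained("path/to/atomformer-checkpoint")  # hypothetical path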

class GaussianLayer(k=128, edge_types=1024)[source]#

Bases: Module

Gaussian pairwise positional embedding layer.

This layer computes the Gaussian positional embeddings for the pairwise distances between atoms in a molecule.

Taken from: https://github.com/microsoft/Graphormer/blob/main/graphormer/models/graphormer_3d.py

forward(x, edge_types)[source]#

Forward pass to compute the Gaussian positional embeddings.

Return type:

Tensor
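
Example — a sketch of embedding pairwise distances; the (batch, N, N, k) output layout follows the Graphormer-3D implementation the layer is taken from, and the pair-type encoding is an assumption:

>>> import torch
>>> from atomgen.models.modeling_atomformer import GaussianLayer
>>> layer = GaussianLayer(k=128, edge_types=1024)
>>> coords = torch.randn(2, 8, 3)
>>> dist = torch.cdist(coords, coords)              # (batch, N, N) pairwise distances
>>> pair_types = torch.randint(0, 1024, (2, 8, 8))  # pair-type indices (assumed encoding)
>>> embed = layer(dist, pair_types)                 # (batch, N, N, 128) Gaussian features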

class InitialStructure2RelaxedEnergy(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with an energy head on top for relaxed energy prediction.

forward(input_ids, coords, labels_energy=None, fixed=None, attention_mask=None)[source]#

Forward function call for the relaxed energy prediction model.

Return type:

Tuple[Optional[Tensor], Tensor]
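
Example — a sketch of relaxed energy supervision; the per-system scalar shape of labels_energy is an assumption:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import InitialStructure2RelaxedEnergy
>>> model = InitialStructure2RelaxedEnergy(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)   # initial (unrelaxed) structure
>>> labels_energy = torch.randn(2)  # one relaxed energy per system (assumed shape)
>>> loss, pred_energy = model(input_ids, coords, labels_energy=labels_energy)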

class InitialStructure2RelaxedStructure(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with a coordinate head on top for relaxed structure prediction.

forward(input_ids, coords, labels_coords=None, fixed=None, attention_mask=None)[source]#

Forward function call for the initial structure to relaxed structure model.

Return type:

Tuple[Optional[Tensor], Tensor]
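
Example — a sketch of relaxed structure supervision; placeholder tensors stand in for real initial and relaxed geometries:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import InitialStructure2RelaxedStructure
>>> model = InitialStructure2RelaxedStructure(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> initial = torch.randn(2, 8, 3)                       # initial structure
>>> relaxed = initial + 0.1 * torch.randn_like(initial)  # placeholder relaxed structure
>>> loss, pred_coords = model(input_ids, initial, labels_coords=relaxed)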

class InitialStructure2RelaxedStructureAndEnergy(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with a coordinate and an energy head on top for relaxed structure and energy prediction.

forward(input_ids, coords, labels_coords=None, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the relaxed structure and energy model.

Return type:

Tuple[Tensor, Tuple[Tensor, Tensor]]
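
Example — a joint structure-and-energy sketch; reading the inner tuple as (predicted coordinates, predicted energy) is inferred from the two heads, not documented:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import InitialStructure2RelaxedStructureAndEnergy
>>> model = InitialStructure2RelaxedStructureAndEnergy(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> initial = torch.randn(2, 8, 3)
>>> relaxed = initial + 0.1 * torch.randn_like(initial)
>>> loss, (pred_coords, pred_energy) = model(
...     input_ids, initial, labels_coords=relaxed, total_energy=torch.randn(2)
... )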

class ParallelBlock(dim, num_heads, mlp_ratio=4, dropout=0.0, k=128, gradient_checkpointing=False)[source]#

Bases: Module

Parallel transformer block (MLP & Attention in parallel).

Based on:

'Scaling Vision Transformers to 22 Billion Parameters' - https://arxiv.org/abs/2302.05442

Adapted from the timm implementation.

forward(x, pos_embed, attention_mask=None)[source]#

Forward pass for the parallel block.

Return type:

Tuple[Tensor, Tensor]
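
Example — a block-level sketch; the pos_embed layout below (a (batch, N, N, k) pairwise embedding) is an assumption, and the block is taken to return the updated (x, pos_embed) pair:

>>> import torch
>>> from atomgen.models.modeling_atomformer import ParallelBlock
>>> block = ParallelBlock(dim=256, num_heads=8, k=128)
>>> x = torch.randn(2, 8, 256)             # (batch, num_atoms, dim) hidden states
>>> pos_embed = torch.randn(2, 8, 8, 128)  # assumed pairwise embedding layout
>>> x, pos_embed = block(x, pos_embed)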

class Structure2Energy(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with an energy head on top for energy prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to energy model.

Return type:

Tuple[Optional[Tensor], Tuple[Tensor, Optional[Tensor]]]
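
Example — an energy supervision sketch; reading the inner tuple as (total energy prediction, optional formation energy prediction) is inferred from the forward arguments:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import Structure2Energy
>>> model = Structure2Energy(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)
>>> loss, (energy_pred, formation_pred) = model(
...     input_ids, coords, total_energy=torch.randn(2)
... )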

class Structure2EnergyAndForces(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with an energy and forces head for energy and forces prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to energy and forces model.

Return type:

Tuple[Tensor, Tuple[Tensor, Tensor, Optional[Tensor]]]
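
Example — a joint energy-and-forces sketch; the unpacking of the inner three-element tuple is an assumption from the return type:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import Structure2EnergyAndForces
>>> model = Structure2EnergyAndForces(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)
>>> loss, (energy_pred, forces_pred, extra) = model(
...     input_ids, coords, forces=torch.randn(2, 8, 3), total_energy=torch.randn(2)
... )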

class Structure2Forces(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with a forces head on top for forces prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to forces model.

Return type:

Tuple[Tensor, Tuple[Tensor, Optional[Tensor]]]
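
Example — a forces-only sketch; that per-atom force targets share the coordinate shape is an assumption:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import Structure2Forces
>>> model = Structure2Forces(AtomformerConfig())
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)
>>> loss, (forces_pred, extra) = model(input_ids, coords, forces=torch.randn(2, 8, 3))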

class Structure2TotalEnergyAndForces(config)[source]#

Bases: AtomformerPreTrainedModel

Atomformer with an energy and forces head for total energy and forces prediction.

forward(input_ids, coords, forces=None, total_energy=None, formation_energy=None, has_formation_energy=None, attention_mask=None)[source]#

Forward function call for the structure to total energy and forces model.

Return type:

Tuple[Optional[Tensor], Tuple[Tensor, Tensor, Optional[Tensor]]]
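
Example — an inference-only sketch; that the loss slot is None when no targets are passed is inferred from the Optional[Tensor] return type:

>>> import torch
>>> from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed import path
>>> from atomgen.models.modeling_atomformer import Structure2TotalEnergyAndForces
>>> model = Structure2TotalEnergyAndForces(AtomformerConfig()).eval()
>>> input_ids = torch.randint(1, 100, (2, 8))
>>> coords = torch.randn(2, 8, 3)
>>> with torch.no_grad():
...     loss, (energy_pred, forces_pred, extra) = model(input_ids, coords)
>>> loss is None  # expected when no targets are provided (assumption)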