atomgen.models.modeling_atomformer module#
Implementation of the Atomformer model.
- class AtomFormerForSystemClassification(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with a classification head for system classification.
- class AtomformerEncoder(config)[source]#
Bases:
Module
Atomformer encoder.
The transformer encoder consists of a series of parallel blocks, each containing a multi-head self-attention mechanism and a feed-forward network.
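In the parallel formulation (see ParallelBlock below), the attention and feed-forward branches read the same normalized hidden states and both outputs are added to the residual stream in a single step, roughly y = x + Attention(LN(x)) + FFN(LN(x)), rather than being applied one after the other.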
- class AtomformerForCoordinateAM(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with an atom coordinate head on top for coordinate denoising.
- class AtomformerForMaskedAM(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with an atom modeling head on top for masked atom modeling.
- class AtomformerModel(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer model for atom modeling.
- class AtomformerPreTrainedModel(config, *inputs, **kwargs)[source]#
Bases:
PreTrainedModel
Base class for all Atomformer models.
- base_model_prefix = 'model'#
- config_class#
alias of
AtomformerConfig
- supports_gradient_checkpointing = True#
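Because the Atomformer classes derive from PreTrainedModel, the standard Hugging Face persistence and checkpointing API applies. A minimal sketch, assuming AtomformerConfig is importable from the package; the import path and checkpoint directory below are illustrative, not prescribed by this module:

```python
from atomgen.models.configuration_atomformer import AtomformerConfig  # assumed path
from atomgen.models.modeling_atomformer import AtomformerModel

# Build a randomly initialized model from a config (default field values assumed).
config = AtomformerConfig()
model = AtomformerModel(config)

# Standard PreTrainedModel persistence.
model.save_pretrained("./atomformer-checkpoint")  # illustrative directory
model = AtomformerModel.from_pretrained("./atomformer-checkpoint")

# supports_gradient_checkpointing = True, so activation checkpointing
# can be enabled to trade extra compute for lower memory during training.
model.gradient_checkpointing_enable()
```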
- class GaussianLayer(k=128, edge_types=1024)[source]#
Bases:
Module
Gaussian pairwise positional embedding layer.
This layer computes the Gaussian positional embeddings for the pairwise distances between atoms in a molecule.
Taken from: https://github.com/microsoft/Graphormer/blob/main/graphormer/models/graphormer_3d.py
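A minimal sketch of the idea, following the Graphormer-3D reference above: each pairwise distance is affinely rescaled per edge type and then evaluated under k learnable Gaussian densities. The class and attribute names here are illustrative; the actual module may differ in detail.

```python
import math
import torch
import torch.nn as nn

class GaussianBasisSketch(nn.Module):
    """Illustrative Gaussian pairwise-distance embedding (after Graphormer-3D)."""

    def __init__(self, k=128, edge_types=1024):
        super().__init__()
        self.means = nn.Embedding(1, k)           # learnable center per Gaussian
        self.stds = nn.Embedding(1, k)            # learnable width per Gaussian
        self.mul = nn.Embedding(edge_types, 1)    # per-edge-type scale of the distance
        self.bias = nn.Embedding(edge_types, 1)   # per-edge-type shift of the distance

    def forward(self, dist, edge_type):
        # dist: (batch, n_atoms, n_atoms) pairwise distances
        # edge_type: (batch, n_atoms, n_atoms) integer pair types
        #            (e.g. derived from the two atomic numbers)
        scaled = self.mul(edge_type).squeeze(-1) * dist + self.bias(edge_type).squeeze(-1)
        x = scaled.unsqueeze(-1)                          # (batch, n, n, 1)
        mean = self.means.weight.view(-1)                 # (k,)
        std = self.stds.weight.view(-1).abs() + 1e-5      # (k,)
        # Evaluate k Gaussian densities at each scaled distance -> (batch, n, n, k).
        return torch.exp(-0.5 * ((x - mean) / std) ** 2) / (math.sqrt(2 * math.pi) * std)
```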
- class InitialStructure2RelaxedEnergy(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with an energy head on top for relaxed energy prediction.
- class InitialStructure2RelaxedStructure(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with a coordinate head on top for relaxed structure prediction.
- class InitialStructure2RelaxedStructureAndEnergy(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with a coordinate and energy head for relaxed structure and energy prediction.
- class ParallelBlock(dim, num_heads, mlp_ratio=4, dropout=0.0, k=128, gradient_checkpointing=False)[source]#
Bases:
Module
Parallel transformer block (MLP & Attention in parallel).
- Based on:
‘Scaling Vision Transformers to 22 Billion Parameters’ - https://arxiv.org/abs/2302.05442
Adapted from the TIMM implementation.
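A minimal sketch of the parallel formulation under assumed names, ignoring the pairwise-distance attention bias implied by the k parameter: a single pre-norm feeds both the self-attention and MLP branches, and both outputs are added to the residual stream together.

```python
import torch.nn as nn

class ParallelBlockSketch(nn.Module):
    """y = x + Attn(LN(x)) + MLP(LN(x)): attention and MLP computed in parallel."""

    def __init__(self, dim, num_heads, mlp_ratio=4, dropout=0.0):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, dropout=dropout, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        h = self.norm(x)                    # one shared pre-norm for both branches
        attn_out, _ = self.attn(h, h, h)    # self-attention branch
        return x + attn_out + self.mlp(h)   # both residual updates applied together
```

Stacking such blocks in an nn.ModuleList and applying them in sequence gives the encoder structure described under AtomformerEncoder above.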
- class Structure2Energy(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with an energy head on top for energy prediction.
- class Structure2EnergyAndForces(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with an energy and forces head for energy and forces prediction.
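Models with joint energy and force heads are commonly trained with a weighted sum of the two regression losses. A hedged sketch of such an objective; the loss functions and weighting here are illustrative, not necessarily what this module uses:

```python
import torch.nn.functional as F

def energy_force_loss(pred_energy, pred_forces, energy, forces, force_weight=1.0):
    """Combine a per-system energy loss with a per-atom force loss (illustrative)."""
    energy_loss = F.l1_loss(pred_energy, energy)   # pred_energy: (batch,)
    force_loss = F.l1_loss(pred_forces, forces)    # pred_forces: (batch, n_atoms, 3)
    return energy_loss + force_weight * force_loss
```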
- class Structure2Forces(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with a forces head on top for forces prediction.
- class Structure2TotalEnergyAndForces(config)[source]#
Bases:
AtomformerPreTrainedModel
Atomformer with an energy and forces head for total energy and forces prediction.