atomgen.models.unimolplus module#
Implementation of the Uni-Mol+ model, with alterations to the original architecture.
- class GaussianLayer(k=128, edge_types=1024)[source]#
Bases:
Module
Gaussian pairwise positional embedding layer.
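For illustration, a minimal sketch of such a layer, inferred from the constructor signature above rather than taken from the atomgen source: each pairwise distance is passed through a learned per-edge-type affine transform and then expanded into k Gaussian basis functions. The class name GaussianLayerSketch and the tensor shapes are assumptions.

```python
import math

import torch
import torch.nn as nn


class GaussianLayerSketch(nn.Module):
    """Sketch: expand pairwise distances into k Gaussian basis functions,
    with a learned affine transform per edge (atom-pair) type."""

    def __init__(self, k=128, edge_types=1024):
        super().__init__()
        self.k = k
        self.means = nn.Embedding(1, k)          # Gaussian centers
        self.stds = nn.Embedding(1, k)           # Gaussian widths
        self.mul = nn.Embedding(edge_types, 1)   # per-edge-type scale
        self.bias = nn.Embedding(edge_types, 1)  # per-edge-type shift

    def forward(self, dist, edge_type):
        # dist: (batch, atoms, atoms) pairwise distances
        # edge_type: (batch, atoms, atoms) integer pair-type ids
        scale = self.mul(edge_type)                   # (..., 1)
        shift = self.bias(edge_type)                  # (..., 1)
        x = scale * dist.unsqueeze(-1) + shift        # (..., 1)
        x = x.expand(*x.shape[:-1], self.k)           # (..., k)
        mean = self.means.weight.view(-1)             # (k,)
        std = self.stds.weight.view(-1).abs() + 1e-5  # (k,)
        # Evaluate a normalized Gaussian density at each basis center.
        return torch.exp(-0.5 * ((x - mean) / std) ** 2) / (math.sqrt(2 * math.pi) * std)
```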
- class InitialStructure2RelaxedEnergy(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with an energy head on top for relaxed energy prediction.
- class InitialStructure2RelaxedStructure(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with a coordinate head on top for relaxed structure prediction.
- class InitialStructure2RelaxedStructureAndEnergy(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with a coordinate and energy head for relaxed structure and energy prediction.
- class ParallelBlock(dim, num_heads, mlp_ratio=4, dropout=0.0, k=128, op_hidden_dim=16, tr_hidden_dim=16, gradient_checkpointing=False)[source]#
Bases:
Module
Parallel transformer block (MLP & Attention in parallel).
- Based on:
“Scaling Vision Transformers to 22 Billion Parameters” - https://arxiv.org/abs/2302.05442
Adapted from the TIMM implementation.
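A minimal sketch of the parallel layout described in that paper, assuming standard PyTorch attention. The pair-feature arguments of the real block (k, op_hidden_dim, tr_hidden_dim) and gradient checkpointing are omitted here.

```python
import torch.nn as nn


class ParallelBlockSketch(nn.Module):
    """Sketch of the ViT-22B parallel layout: the attention and MLP branches
    read the same normalized input and are summed into the residual stream."""

    def __init__(self, dim, num_heads, mlp_ratio=4, dropout=0.0):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads,
                                          dropout=dropout, batch_first=True)
        hidden = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        y = self.norm(x)                  # one shared pre-norm
        attn_out, _ = self.attn(y, y, y)  # attention branch
        mlp_out = self.mlp(y)             # MLP branch, computed in parallel
        return x + attn_out + mlp_out     # fused residual update
```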
- class Structure2Energy(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with an energy head on top for energy prediction.
- class Structure2EnergyAndForces(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with an energy and forces head for energy and forces prediction.
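For illustration, a hypothetical version of the two heads; the class name EnergyAndForcesHeadSketch, the mean pooling, and the plain linear projections are assumptions, and the real heads may differ.

```python
import torch.nn as nn


class EnergyAndForcesHeadSketch(nn.Module):
    """Hypothetical heads: a pooled scalar energy per structure
    and a per-atom 3-vector of forces."""

    def __init__(self, dim):
        super().__init__()
        self.energy_proj = nn.Linear(dim, 1)  # structure-level scalar
        self.forces_proj = nn.Linear(dim, 3)  # per-atom force vector

    def forward(self, hidden):
        # hidden: (batch, atoms, dim) encoder output
        energy = self.energy_proj(hidden).mean(dim=1).squeeze(-1)  # (batch,)
        forces = self.forces_proj(hidden)                          # (batch, atoms, 3)
        return energy, forces
```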
- class Structure2Forces(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with a forces head on top for forces prediction.
- class TransformerConfig(vocab_size=123, dim=768, num_heads=32, depth=12, mlp_ratio=1, k=128, op_hidden_dim=16, tr_hidden_dim=16, dropout=0.0, mask_token_id=0, pad_token_id=119, bos_token_id=120, eos_token_id=121, cls_token_id=122, **kwargs)[source]#
Bases:
PretrainedConfig
Configuration of a UniMolPlusModel. It is used to instantiate a UniMolPlus model according to the specified arguments.
- model_type: str = 'transformer'#
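A usage sketch, using only names documented on this page; weights are randomly initialized, as with any HuggingFace-style pretrained-model subclass.

```python
from atomgen.models.unimolplus import TransformerConfig, TransformerModel

# Override a few of the documented defaults.
config = TransformerConfig(dim=256, num_heads=8, depth=6)

# Instantiate the backbone from the configuration.
model = TransformerModel(config)
```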
- class TransformerEncoder(config)[source]#
Bases:
Module
UniMolPlus Transformer encoder.
The transformer encoder consists of a series of parallel blocks, each containing a multi-head self-attention mechanism and a feed-forward network.
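Conceptually, the encoder reduces to a stack of depth parallel blocks. The structural sketch below reuses the ParallelBlockSketch defined above and assumes the field names from TransformerConfig; the real encoder also threads pair representations and optional gradient checkpointing through each block.

```python
import torch.nn as nn


class TransformerEncoderSketch(nn.Module):
    """Structural sketch: `depth` parallel blocks applied in sequence."""

    def __init__(self, config):
        super().__init__()
        self.blocks = nn.ModuleList(
            ParallelBlockSketch(config.dim, config.num_heads,
                                config.mlp_ratio, config.dropout)
            for _ in range(config.depth)
        )

    def forward(self, x):
        # x: (batch, atoms, dim) atom representations
        for block in self.blocks:
            x = block(x)
        return x
```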
- class TransformerForCoordinateAM(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with an atom coordinate head on top for coordinate denoising.
- class TransformerForMaskedAM(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer with an atom modeling head on top for masked atom modeling.
- class TransformerModel(config)[source]#
Bases:
TransformerPreTrainedModel
Transformer model for atom modeling.