The ligand atoms keep moving into formation, adjusting their positions iteratively.
3. Final Coordinates:
After several rounds, the model spits out the final 3D coordinates of the ligand atoms.
And there you have it, Dockformer in action!
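Just to make the idea concrete, here's a toy sketch of what that refinement loop could look like in PyTorch. The module name, dimensions, and the simple "predict a delta, nudge the atoms" update are my own illustration, not Dockformer's actual code.

```python
import torch
import torch.nn as nn

class CoordinateRefiner(nn.Module):
    """Toy sketch: nudge ligand atom coordinates over several rounds."""
    def __init__(self, dim=128, n_rounds=4):
        super().__init__()
        self.n_rounds = n_rounds
        # Small MLP that reads each atom's embedding and predicts a 3D shift.
        self.delta_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, ligand_emb, coords):
        # ligand_emb: (n_atoms, dim) refined atom embeddings
        # coords:     (n_atoms, 3) current guess at the ligand coordinates
        for _ in range(self.n_rounds):
            coords = coords + self.delta_head(ligand_emb)  # move atoms a little each round
        return coords  # final 3D coordinates of the ligand atoms
```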
Two types of attention layers come into play (rough sketch after this list):
a. Intra-ligand attention: Helps the ligand atoms organize themselves correctly.
b. Ligand-protein cross-attention: Helps the ligand atoms adjust based on the protein’s pocket geometry.
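Here's a minimal version of those two layers, assuming standard multi-head attention (the exact layer design in the paper may differ):

```python
import torch
import torch.nn as nn

class LigandUpdateBlock(nn.Module):
    """Sketch of one block: ligand self-attention, then cross-attention to the protein pocket."""
    def __init__(self, dim=128, n_heads=8):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, n_heads, batch_first=True)  # ligand <-> ligand
        self.cross = nn.MultiheadAttention(dim, n_heads, batch_first=True)  # ligand <-> protein
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, lig, prot):
        # lig: (B, n_lig_atoms, dim), prot: (B, n_prot_atoms, dim)
        lig = self.norm1(lig + self.intra(lig, lig, lig)[0])    # a. atoms organize themselves
        lig = self.norm2(lig + self.cross(lig, prot, prot)[0])  # b. atoms adjust to the pocket geometry
        return lig
```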
Using all this information, we predict distance matrices:
a. How far should ligand atoms be from each other (intra)?
b. How far should ligand atoms be from protein atoms (inter)?
By now, Dockformer has a detailed understanding of how the ligand fits into the protein pocket.
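In code terms, this boils down to two small prediction heads sitting on top of the pair representations. A hedged sketch (layer names and shapes are illustrative):

```python
import torch
import torch.nn as nn

class DistanceHeads(nn.Module):
    """Toy sketch: read pair embeddings, predict intra- and inter-molecular distance matrices."""
    def __init__(self, pair_dim=64):
        super().__init__()
        self.intra_head = nn.Linear(pair_dim, 1)  # ligand-ligand distances
        self.inter_head = nn.Linear(pair_dim, 1)  # ligand-protein distances

    def forward(self, lig_pair, lig_prot_pair):
        # lig_pair:      (n_lig, n_lig, pair_dim)
        # lig_prot_pair: (n_lig, n_prot, pair_dim)
        d_intra = torch.relu(self.intra_head(lig_pair)).squeeze(-1)       # distances can't be negative
        d_inter = torch.relu(self.inter_head(lig_prot_pair)).squeeze(-1)
        return d_intra, d_inter
```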
1. Concatenation:
The ligand and protein embeddings coming out of the encoders are concatenated into one joint sequence.
2. Binding Blocks:
These transformer layers refine the combined representation (rough sketch after this list), addressing:
a. Intra-ligand interactions
b. Ligand-protein interactions
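If you squint, the data flow looks roughly like this, with all dimensions made up and a vanilla transformer encoder standing in for the actual binding blocks:

```python
import torch
import torch.nn as nn

dim = 128
lig_emb = torch.randn(1, 30, dim)    # 30 ligand atoms from the ligand encoder
prot_emb = torch.randn(1, 200, dim)  # 200 pocket atoms from the protein encoder

# 1. Concatenate the two embeddings into one joint sequence.
joint = torch.cat([lig_emb, prot_emb], dim=1)  # (1, 230, dim)

# 2. Binding blocks: transformer layers refining the combined representation,
#    so self-attention covers both intra-ligand and ligand-protein interactions.
binding_blocks = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=4,
)
refined = binding_blocks(joint)
```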
Now comes the attention mechanism, with a twist: the pairwise info acts as a bias. Each atom attends not just to the other atoms, but also to its specific relationship with each of them (e.g., distance, bond type).
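Here's a minimal single-head sketch of what "pairwise info as bias" means: the pair embedding is squashed to a scalar and added to the attention logits before the softmax. This is my simplification of the idea, not the paper's exact layer.

```python
import math
import torch
import torch.nn as nn

class PairBiasedAttention(nn.Module):
    """Toy single-head attention with a learned bias from the pair embedding."""
    def __init__(self, dim=128, pair_dim=64):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.pair_bias = nn.Linear(pair_dim, 1)  # pair info (distance, bond type, ...) -> scalar bias

    def forward(self, x, pair):
        # x: (n_atoms, dim), pair: (n_atoms, n_atoms, pair_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.T / math.sqrt(q.shape[-1])           # ordinary dot-product scores
        logits = logits + self.pair_bias(pair).squeeze(-1)  # plus the pairwise bias
        return torch.softmax(logits, dim=-1) @ v
```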
1. Atom Embedding Initialization:
Each atom’s features (identity + location) are transformed slightly to make them more model-friendly (sketch of both steps after this list).
2. Pair Embedding Initialization:
Every pair of atoms gets initialized using 2D and 3D information.
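Roughly, in code (feature dimensions and layer names are my own placeholders):

```python
import torch
import torch.nn as nn

class EmbeddingInit(nn.Module):
    """Toy sketch: project raw atom features, then build a pair embedding for every atom pair."""
    def __init__(self, atom_feat_dim=32, pair2d_dim=16, pair3d_dim=16, dim=128, pair_dim=64):
        super().__init__()
        self.atom_proj = nn.Linear(atom_feat_dim, dim)                 # make atom features model-friendly
        self.pair_proj = nn.Linear(pair2d_dim + pair3d_dim, pair_dim)  # fuse 2D + 3D pair information

    def forward(self, atom_feats, pair2d, pair3d):
        # atom_feats: (n_atoms, atom_feat_dim); pair2d / pair3d: (n_atoms, n_atoms, *)
        atoms = self.atom_proj(atom_feats)
        pairs = self.pair_proj(torch.cat([pair2d, pair3d], dim=-1))
        return atoms, pairs
```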
1. Learnable Position Embedding (GPE): Tags each atom’s location using sine/cosine functions.
2. 3D Pair Features: Inter-atomic distances are passed through a bank of Gaussian kernels, which helps the model focus on different distance scales (rough sketch below).
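A rough sketch of those two ingredients; the frequencies, kernel count, and maximum distance are arbitrary choices on my part, not the paper's values.

```python
import torch

def sincos_position_features(values, dim=16):
    """Toy sketch: sine/cosine features of each coordinate at several frequencies."""
    freqs = torch.exp(torch.arange(0, dim, 2).float() * (-torch.log(torch.tensor(10000.0)) / dim))
    angles = values.unsqueeze(-1) * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

def gaussian_distance_features(dist, n_kernels=16, d_max=20.0):
    """Toy sketch: expand each inter-atomic distance over a bank of Gaussian kernels."""
    centers = torch.linspace(0.0, d_max, n_kernels)
    width = d_max / n_kernels
    return torch.exp(-((dist.unsqueeze(-1) - centers) ** 2) / (2 * width ** 2))

coords = torch.randn(4, 3) * 5.0                                       # toy 3D coordinates for 4 atoms
pos_feats = sincos_position_features(coords)                           # (4, 3, 16) per-axis position tags
pair_feats = gaussian_distance_features(torch.cdist(coords, coords))   # (4, 4, 16) distance features
```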
Next step!
1) Atom Identity:
This is the atom type of each atom in the protein and ligand (one-hot encoding, obviously!).
2) 2D Graph Information (ligand only, since the protein is treated as rigid):
Here, shortest-path distances and edge features from the bond graph are used.
3) 3D Geometric Information (a toy sketch of all three inputs follows below):
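To make it concrete, here's a toy tensor-level view of these three inputs. The values and featurization details are invented; a real pipeline would extract them from the molecule and structure files.

```python
import torch
import torch.nn.functional as F

# 1) Atom identity: one-hot atom types for a tiny 4-atom ligand (indices are made up).
atom_types = torch.tensor([0, 1, 1, 2])            # e.g. 0 = C, 1 = N, 2 = O
one_hot = F.one_hot(atom_types, num_classes=10).float()

# 2) 2D graph info (ligand only): shortest-path distances in the bond graph of a 4-atom chain.
sp_dist = torch.tensor([[0, 1, 2, 3],
                        [1, 0, 1, 2],
                        [2, 1, 0, 1],
                        [3, 2, 1, 0]], dtype=torch.float)

# 3) 3D geometric info: pairwise Euclidean distances computed from atom coordinates.
coords = torch.randn(4, 3)
geo_dist = torch.cdist(coords, coords)
```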
Next thread👇
Step 1: Getting to Know the Players
First, we introduce our protein and ligand to Dockformer. These are the two main characters in our story. To help the model understand them better, we describe them in three different ways: