Commit 9c8c9c3

Merge commit: 2 parents (18706af + 790c3ab)

1 file changed: +9 / -6 lines

README.md

Lines changed: 9 additions & 6 deletions
````diff
@@ -23,7 +23,7 @@
 
 ## Overview
 
-This repository provides a **from-scratch, modular PyTorch implementation of the core AlphaFold2 architecture**.
+This repository provides a **from-scratch, modular PyTorch implementation of the AlphaFold2 architecture**.
 
 Where the original DeepMind release and frameworks like OpenFold are designed for large-scale production, this project is built for **architectural transparency, research experimentation, and real hands-on understanding**. It breaks the AlphaFold2 pipeline into inspectable, hackable components so researchers and students can study how Multiple Sequence Alignments (MSA), pair representations, Evoformer updates, and geometric heads interact at the tensor level.
 
@@ -224,21 +224,21 @@ device = "cuda" if torch.cuda.is_available() else "cpu"
 # You need to download the data first
 dataset = FoldbenchProteinDataset(
     manifest_csv="data/showcase_manifest.csv",
-    max_msa_seqs=128,)
+    max_msa_seqs=128, crop_size=64, random_crop=True,)
 
 loader = DataLoader(dataset, batch_size=1, shuffle=True, collate_fn=collate_proteins)
 
 model = AlphaFold2(
     n_tokens=max(AA_VOCAB.values()) + 1,
     pad_idx=AA_VOCAB["-"],
-    c_m=256,
-    c_z=128,
-    c_s=256,
     num_evoformer_blocks=2,
     num_structure_blocks=4,
-    n_torsions=3).to(device)
+    n_torsions=3, num_res_blocks_torsion=2,
+    extra_msa_stack_enabled=True, template_stack_enabled=True,
+).to(device)
 
 criterion = AlphaFoldLoss()
+
 total_steps = 20 * len(loader)
 optimizer, scheduler = build_optimizer_and_scheduler(
     model=model,
@@ -264,12 +264,15 @@ result = train_alphafold2(
     scheduler=scheduler,
     ema=ema,
     scaler=amp_cfg["scaler"],
+    grad_clip=1.0,
     device=device,
     epochs=20,
     amp_enabled=amp_cfg["amp_enabled"],
     amp_dtype=amp_cfg["amp_dtype_requested"],
     ideal_backbone_local=ideal_backbone_local,
     ckpt_dir="checkpoints_af2",
+    num_recycles=3,
+    stochastic_recycling=True,
     run_name="af2_poc")
 
 ```
````
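The new `crop_size=64, random_crop=True` dataset arguments follow the standard AlphaFold2 training practice of working on fixed-length contiguous residue crops rather than full chains. A minimal, framework-free sketch of that idea, assuming sequences are plain Python lists (the function name and signature here are illustrative, not this repository's API):

```python
import random

def random_crop(seq, crop_size, training=True):
    """Return a contiguous window of length crop_size.

    Sequences shorter than crop_size are returned unchanged. During
    training the window start is sampled uniformly; at eval time the
    crop starts at 0 so results are deterministic.
    """
    if len(seq) <= crop_size:
        return seq
    start = random.randint(0, len(seq) - crop_size) if training else 0
    return seq[start:start + crop_size]

# Example: a 100-residue "sequence" cropped to 64.
crop = random_crop(list(range(100)), 64)
assert len(crop) == 64
```

In the real pipeline the same window would be applied consistently to the MSA, the pair features, and the coordinates, so all tensors stay aligned.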
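The new `num_recycles=3, stochastic_recycling=True` training arguments refer to AlphaFold2's recycling scheme: the model re-ingests its own outputs for several iterations, and during training the iteration count is sampled uniformly per step (with gradients typically flowing only through the final pass). A minimal sketch of that control flow, with an illustrative helper that is not this repository's API:

```python
import random

def run_with_recycling(step_fn, inputs, num_recycles=3,
                       stochastic=True, training=True):
    """Apply step_fn repeatedly, feeding its output back in as the
    recycled state. With stochastic recycling, each training step
    samples the recycle count uniformly from 0..num_recycles."""
    n = random.randint(0, num_recycles) if (stochastic and training) else num_recycles
    state = None
    for _ in range(n + 1):  # n recycles = n + 1 forward passes
        state = step_fn(inputs, state)
    return state, n

# Toy step function: counts how many times it has been applied.
def toy_step(inputs, state):
    return (state or 0) + 1

out, n = run_with_recycling(toy_step, None, num_recycles=3, stochastic=False)
# out == 4: one initial pass plus three recycles
```

Sampling the recycle count during training keeps the cost of a step bounded while still teaching the model to improve its own predictions across iterations.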
