In the previous AdaptyvBio competition I ranked #1 in silico, and then went 0-for-N in vitro. The failure modes were simple and painful:
A year later, we launched a tech-bio venture focused on structural prediction, protein–receptor interaction simulation, and computational design of modulatory proteins. We used that time to build a multi-engine design platform, including a modality-conditioned binder generator (VHHs, VNARs, knottins, DARPins, affibodies/Z-domains, and more). For this competition, though, we deliberately constrained ourselves.
Constraints we imposed (on purpose)
To make this a realistic therapeutic-style exercise and not a metric chase:
About the low ipSAE (avg ~0.46): why we still think these are worth expressing
ipSAE is a useful filter, but it’s also interface-stringent and penalizes uncertainty. In nanobody–target complexes, uncertainty often comes from exactly the thing that makes VHHs valuable: flexible CDR loops (especially CDR3) engaging flexible or shallow epitopes.
A low ipSAE can mean any of the following:
Critically: we did not tune to ipSAE, so this low number is not a “design failure” so much as an honest readout that the interface geometry is not sharply determined by the model.
That’s exactly the kind of situation where experimental expression and binding data are informative, and exactly why community-vote expression slots matter.
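To make that concrete, here is a minimal sketch of an ipSAE-style interface score computed from a predicted aligned error (PAE) matrix. This is not the reference ipSAE implementation: the function names, the 10 Å PAE cutoff, the d0 clamp, and the chain labels below are illustrative assumptions. What it shows is the mechanism behind the penalty: when few interchain residue pairs have low PAE (the flexible-CDR3-on-a-shallow-epitope case above), the score collapses toward zero even if a viable binding pose exists.

```python
# Minimal sketch of an ipSAE-style interface-confidence score from a PAE matrix.
# NOT the reference ipSAE implementation; the cutoff, d0 handling, and names
# here are illustrative assumptions for this post.
import numpy as np


def ptm_d0(n_res: int) -> float:
    """TM-score d0 scaling used by pTM-like metrics, clamped here to >= 1.0."""
    return max(1.0, 1.24 * (max(n_res, 19) - 15) ** (1.0 / 3.0) - 1.8)


def interface_pae_score(pae: np.ndarray, chain_ids: np.ndarray,
                        binder: str = "B", target: str = "A",
                        pae_cutoff: float = 10.0) -> float:
    """Max over binder residues of the mean pTM-like term taken over target
    residues whose interchain PAE is below `pae_cutoff`."""
    binder_idx = np.where(chain_ids == binder)[0]
    target_idx = np.where(chain_ids == target)[0]
    best = 0.0
    for i in binder_idx:
        row = pae[i, target_idx]
        mask = row < pae_cutoff        # keep only confidently placed pairs
        if not mask.any():
            continue                   # this residue "sees" no target residue well
        d0 = ptm_d0(int(mask.sum()))
        terms = 1.0 / (1.0 + (row[mask] / d0) ** 2)
        best = max(best, float(terms.mean()))
    return best                        # low score = interface geometry poorly pinned down
```

A score near zero therefore says "the model cannot pin the loops onto one epitope geometry," which overlaps with, but is not the same as, "this binder will not bind."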
Why vote for these designs?
If you vote for this set, you’re not voting for a claim that “the model guarantees binding.” You’re voting for an experiment with a clear rationale:
Will a developability-first, clinically grounded VHH strategy produce expressible proteins that have a real shot at binding—even when ipSAE is modest?
And conversely, how often does metric-optimized docking confidence translate into wet-lab success on this target?
Either outcome teaches the community something useful, and the designs themselves are engineered to maximize the chance we actually get protein expressed and testable.
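To make "developability-first" and "conservative sequence engineering" concrete, below is a minimal sketch of the kind of sequence-liability screen that strategy implies, built from standard antibody-engineering heuristics (glycosylation sequons, deamidation/isomerization hotspots, unpaired cysteines, oxidation-prone methionines). The motif list is illustrative, not our actual pipeline.

```python
# Minimal sketch of a sequence-liability screen for VHH candidates.
# The motif set is a standard, illustrative selection, not a full pipeline.
import re

LIABILITY_PATTERNS = {
    "N-glycosylation sequon (N-X-S/T, X != P)": r"N[^P][ST]",
    "Asn deamidation hotspot (NG/NS)":          r"N[GS]",
    "Asp isomerization hotspot (DG/DS/DD)":     r"D[GSD]",
    "Met oxidation (coarse: lists every Met)":  r"M",
}


def liability_report(seq: str) -> dict:
    """Return {liability name: [1-based positions]} for one VHH sequence."""
    report = {name: [m.start() + 1 for m in re.finditer(pattern, seq)]
              for name, pattern in LIABILITY_PATTERNS.items()}
    # Most VHH frameworks carry a single conserved Cys pair; deviations are
    # worth a manual look (extra Cys risks mispairing, missing Cys hurts stability).
    cys_positions = [i + 1 for i, aa in enumerate(seq) if aa == "C"]
    if len(cys_positions) != 2:
        report["non-canonical cysteine count"] = cys_positions
    return report


# Example (hypothetical fragment, not one of the submitted sequences):
# liability_report("EVQLVESGGGLVQPGGSLRLSCAAS...")
```

In a developability-first workflow, flags like these mark positions to inspect before spending an expression slot, independent of any structural confidence metric.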
If you want the expression slots to go to entries that are therapeutically plausible, conservative in sequence engineering, and designed to avoid metric overfitting, I’d appreciate your vote!
| ID | Target | ipSAE | pLDDT | MW | Length |
|----|--------|-------|-------|----|--------|
| ivory-fox-ruby | Nipah Virus Glycoprotein G | 0.69 | 78.95 | 14.3 kDa | 128 aa |
| strong-ibis-sand | Nipah Virus Glycoprotein G | 0.58 | 83.52 | 14.1 kDa | 128 aa |
| vast-deer-topaz | Nipah Virus Glycoprotein G | 0.47 | 81.73 | 14.5 kDa | 128 aa |
| quiet-ram-dust | Nipah Virus Glycoprotein G | 0.30 | 79.98 | 14.5 kDa | 128 aa |
| azure-ox-onyx | Nipah Virus Glycoprotein G | 0.24 | 80.24 | 14.3 kDa | 128 aa |
| swift-crane-jade | Nipah Virus Glycoprotein G | 0.00 | 78.61 | 13.9 kDa | 128 aa |