For the Nipah Binder Challenge, we followed a systematic workflow combining structure-guided design, iterative hotspot refinement, and robust evaluation of ipSAE scores.
We began by generating an initial library of candidate binders using the standard BindCraft pipeline. To identify meaningful interaction regions on the Nipah glycoprotein, we visualized the glycoprotein structure in PyMOL, mapping solvent-exposed and structurally plausible epitopes. Using distance, accessibility, and structural complementarity cues, we selected an initial set of putative hotspot residues and integrated them into BindCraft’s hotspot-guided design mode. From there, we iteratively refined these hotspots: after each design batch, we adjusted the hotspot definitions and regenerated binder sets.
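The hotspot filter described above can be sketched as a simple two-criterion selection. The residue names, SASA threshold, and distance cutoff below are illustrative assumptions, not the exact values we used; in practice the cues came from visual inspection in PyMOL rather than a single script.

```python
# Sketch of the hotspot-selection filter. Thresholds and residues are
# illustrative; the real selection combined these cues with manual
# inspection of the Nipah glycoprotein in PyMOL.
from dataclasses import dataclass

@dataclass
class Residue:
    chain: str
    number: int
    sasa: float              # solvent-accessible surface area (A^2)
    min_epitope_dist: float  # distance (A) to the nearest epitope residue

def select_hotspots(residues, sasa_cutoff=20.0, dist_cutoff=8.0):
    """Keep residues that are both solvent-exposed and near the epitope."""
    return [
        f"{r.chain}{r.number}"
        for r in residues
        if r.sasa >= sasa_cutoff and r.min_epitope_dist <= dist_cutoff
    ]

candidates = [
    Residue("A", 501, sasa=45.2, min_epitope_dist=4.1),   # exposed, close: kept
    Residue("A", 530, sasa=5.3,  min_epitope_dist=3.0),   # buried: dropped
    Residue("A", 579, sasa=60.0, min_epitope_dist=15.0),  # too far: dropped
]
print(select_hotspots(candidates))  # -> ['A501']
```

The surviving residue identifiers (e.g. `A501`) are what we passed to BindCraft's hotspot-guided design mode.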
For evaluation, we implemented an automated scoring pipeline based on Boltz predictions and the official ipSAE script provided by the challenge. For each binder, we ran Boltz to generate structure and PAE outputs, which were then processed by the ipSAE pipeline to compute ipSAE_max and ipSAE_min, from which the challenge-relevant average can be derived. Since the Boltz predictions, and hence the ipSAE scores derived from them, vary stochastically between runs, we assessed robustness by repeating the full evaluation five times per binder.
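The aggregation step of this repeated evaluation can be sketched as follows. In the real pipeline each run invokes Boltz and the official ipSAE script; here the per-run `(ipSAE_max, ipSAE_min)` pairs are mocked as plain numbers, and the function names are ours, not part of either tool.

```python
# Sketch of the per-binder aggregation across repeated evaluations.
# Each tuple mocks one full Boltz + ipSAE run (values are illustrative).
from statistics import mean

def aggregate_runs(runs):
    """runs: list of (ipsae_max, ipsae_min) tuples, one per evaluation.
    Returns summary statistics for a single binder."""
    avgs = [(mx + mn) / 2 for mx, mn in runs]  # challenge-relevant average per run
    mins = [mn for _, mn in runs]
    return {
        "ipsae_avg": mean(avgs),  # mean of per-run averages
        "ipsae_min": min(mins),   # worst-case minimum across runs
    }

runs = [(0.82, 0.74), (0.80, 0.71), (0.83, 0.75), (0.81, 0.73), (0.79, 0.70)]
summary = aggregate_runs(runs)
print(summary)
```

Keeping the worst-case minimum alongside the mean is what lets the ranking below penalize binders whose scores look good on average but collapse in individual runs.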
Using these repeated evaluations, we ranked candidates along two axes: (1) ipSAE_avg across runs, and (2) ipSAE_min across runs.
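One plausible reading of "top under both metrics" is the intersection of the two per-axis rankings, which can be sketched as below. The binder IDs and scores are hypothetical, and the intersection rule is our interpretation, not mandated by the challenge.

```python
# Sketch of the two-axis ranking: a binder survives only if it ranks in
# the top k under BOTH ipSAE_avg and ipSAE_min. Scores are illustrative.
scores = {
    "binder_007": {"ipsae_avg": 0.77, "ipsae_min": 0.70},
    "binder_042": {"ipsae_avg": 0.81, "ipsae_min": 0.65},
    "binder_113": {"ipsae_avg": 0.79, "ipsae_min": 0.72},
}

def top_k_under_both(scores, k):
    """Return binders that appear in the top k of both rankings."""
    by_avg = sorted(scores, key=lambda b: scores[b]["ipsae_avg"], reverse=True)[:k]
    by_min = sorted(scores, key=lambda b: scores[b]["ipsae_min"], reverse=True)[:k]
    return sorted(set(by_avg) & set(by_min))

print(top_k_under_both(scores, k=2))  # -> ['binder_113']
```

Here `binder_042` has the best average but the worst minimum, so it is filtered out; requiring both axes favors binders that are strong and stable.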
From an initial set of 206 designs, we first selected the top 25 under both metrics. After re-evaluating this reduced pool five additional times, we narrowed the selection to four binders that showed consistently strong and stable ipSAE scores across all runs. These four binders were then re-evaluated five more times, narrowing the selection to the final two binders chosen for submission.
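The overall narrowing funnel (206 → 25 → 4 → 2, each stage backed by five repeated evaluations) can be sketched as below. The `evaluate` stub stands in for one full Boltz + ipSAE run and is mocked with a fixed random seed; only the pool sizes and the five-run repetition come from the text.

```python
# Sketch of the selection funnel: 206 -> 25 -> 4 -> 2.
# evaluate() mocks one Boltz + ipSAE run as a base score plus noise.
import random

rng = random.Random(0)
base = {f"binder_{i:03d}": rng.uniform(0.3, 0.9) for i in range(206)}

def evaluate(binder):
    """Mocked single evaluation: underlying score plus run-to-run noise."""
    return base[binder] + rng.uniform(-0.02, 0.02)

def narrow(pool, keep, n_runs=5):
    """Re-evaluate each binder n_runs times; keep the top mean scorers."""
    avg = {b: sum(evaluate(b) for _ in range(n_runs)) / n_runs for b in pool}
    return sorted(pool, key=avg.get, reverse=True)[:keep]

pool = list(base)
for keep in (25, 4, 2):
    pool = narrow(pool, keep)
print(len(pool))  # -> 2
```

In the real pipeline the ranking at each stage used both ipSAE_avg and ipSAE_min rather than a single mean, and stability across runs was checked by hand.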