Fig. 2: The model architecture of ProRefiner.

A partial sequence, either provided directly or constructed from a base model's generation, is encoded together with the backbone structure to obtain graph features. Several memory-efficient global graph attention layers then propagate these features and learn global residue interactions. Finally, the whole sequence is generated in one shot.
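To make the "global residue interactions" step concrete, the following is a minimal, illustrative sketch of single-head scaled dot-product attention over residue features, where every residue attends to every other residue. It is not the paper's memory-efficient implementation; the function name, dimensions, and weight matrices are all assumptions for illustration.

```python
import numpy as np

def global_graph_attention(h, Wq, Wk, Wv):
    # h: (n_residues, d) residue features from the structure/sequence encoder.
    # Plain (not memory-efficient) single-head attention over all residue
    # pairs, so each residue can aggregate information from every other one.
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over the key axis (numerically stabilized).
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
n_residues, d = 8, 16  # hypothetical sizes
h = rng.standard_normal((n_residues, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = global_graph_attention(h, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```

A memory-efficient variant would avoid materializing the full n x n score matrix (e.g. by computing attention in chunks); this sketch only shows the global-interaction idea itself.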