Rank-Factorized Implicit Neural Bias: Scaling Super-Resolution Transformer with FlashAttention
This paper proposes Rank-factorized Implicit Neural Bias (RIB), a novel positional bias mechanism that makes hardware-efficient FlashAttention usable in Super-Resolution Transformers. RIB enables significantly larger window sizes and training patches, achieving state-of-the-art performance (35.63 dB PSNR) while reducing training and inference times by factors of 2.1× and 2.9×, respectively.
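To make the idea concrete, below is a minimal sketch (not the authors' code) of how a rank-factorized positional bias could be folded into a FlashAttention-style fused kernel. The module name, rank, and MLP shapes are illustrative assumptions: a small coordinate MLP produces low-rank factors U and V (bias ≈ U Vᵀ), which are concatenated to the queries and keys so the additive bias is never materialized as an N×N matrix that fused attention kernels cannot accept. Whether the fused backend is actually selected still depends on PyTorch's backend constraints, and the `scale=` keyword of `scaled_dot_product_attention` requires a recent PyTorch release.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RankFactorizedBiasAttention(nn.Module):
    """Illustrative sketch of a rank-factorized implicit positional bias."""

    def __init__(self, dim: int, num_heads: int = 4, rank: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.rank = rank
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Implicit neural functions: map normalized (y, x) coordinates to
        # per-head, rank-r factors for queries (U) and keys (V).
        self.bias_u = nn.Sequential(nn.Linear(2, 64), nn.GELU(),
                                    nn.Linear(64, num_heads * rank))
        self.bias_v = nn.Sequential(nn.Linear(2, 64), nn.GELU(),
                                    nn.Linear(64, num_heads * rank))

    def forward(self, x: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) window tokens; coords: (N, 2) normalized positions.
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each (B, H, N, d)

        u = self.bias_u(coords).view(N, self.num_heads, self.rank)
        w = self.bias_v(coords).view(N, self.num_heads, self.rank)
        u = u.permute(1, 0, 2).expand(B, -1, -1, -1)     # (B, H, N, r)
        w = w.permute(1, 0, 2).expand(B, -1, -1, -1)     # (B, H, N, r)

        # Fold the low-rank bias into Q/K:
        #   [q * scale, u] . [k, w]^T = scale * q k^T + u w^T,
        # i.e. the usual attention logits plus the implicit positional bias,
        # computed without ever forming the (N x N) bias matrix.
        scale = self.head_dim ** -0.5
        q_aug = torch.cat([q * scale, u], dim=-1)
        k_aug = torch.cat([k, w], dim=-1)
        out = F.scaled_dot_product_attention(q_aug, k_aug, v, scale=1.0)
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Hypothetical usage: one 8x8 attention window with normalized coordinates.
attn = RankFactorizedBiasAttention(dim=128, num_heads=4, rank=8)
tokens = torch.randn(2, 64, 128)
ys, xs = torch.meshgrid(torch.linspace(0, 1, 8),
                        torch.linspace(0, 1, 8), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(64, 2)
out = attn(tokens, coords)                               # (2, 64, 128)
```

Because the bias lives entirely in the extra rank-r channels of the queries and keys, memory and compute grow linearly with the window size rather than quadratically with an explicit bias table, which is what would allow much larger windows and training patches under a fused attention kernel.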