Attn-QAT: 4-Bit Attention With Quantization-Aware Training
This paper introduces Attn-QAT, the first systematic 4-bit quantization-aware training framework for attention mechanisms. By matching low-precision recomputation in the backward pass and correcting implicit precision assumptions, Attn-QAT ensures stable FP4 training and inference, eliminates quality drops, and delivers up to a 1.5x speedup on FP4-capable GPUs without relying on outlier-mitigation heuristics.
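To make the core idea concrete, the following is a minimal, hypothetical PyTorch sketch of quantization-aware attention in this spirit: Q, K, V and the attention map are fake-quantized in the forward pass with a straight-through estimator, so the backward pass differentiates through the same low-precision values the kernel would see. All names (`fake_quant4`, `QATAttention`) are illustrative and not the paper's API, and true FP4 uses a non-uniform grid that is approximated here with symmetric integer levels.

```python
import torch

def fake_quant4(x: torch.Tensor, qmax: int = 7) -> torch.Tensor:
    """Symmetric per-tensor fake quantization to ~4-bit levels (-qmax..qmax).
    Uniform integer levels stand in for FP4 here (an assumption, not the paper's
    format). A straight-through estimator keeps gradients flowing."""
    scale = x.detach().abs().amax().clamp(min=1e-8) / qmax
    q = torch.round(x / scale).clamp(-qmax, qmax) * scale
    return x + (q - x).detach()  # forward value is q, backward is identity

class QATAttention(torch.nn.Module):
    """Self-attention whose forward (and thus recomputed backward) math runs on
    fake-quantized tensors, so training 'sees' the inference precision."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Quantize Q, K, V before the matmuls; autograd then differentiates
        # through these quantized values, matching the low-precision kernel.
        q, k, v = (fake_quant4(t).view(b, n, self.heads, -1).transpose(1, 2)
                   for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = fake_quant4(attn.softmax(dim=-1))  # quantize the attention map too
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

if __name__ == "__main__":
    layer = QATAttention(dim=64)
    y = layer(torch.randn(2, 16, 64))
    y.sum().backward()  # gradients pass through the straight-through estimator
    print(y.shape)
```

This sketch only emulates low precision numerically; the speedups reported in the paper require actual FP4 attention kernels on FP4-capable GPUs.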