LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning
The paper introduces LoFT, a parameter-efficient fine-tuning method that aligns the optimizer's internal dynamics (momentum and variance) with those of full fine-tuning, restricted to a low-rank subspace. By matching these dynamics, LoFT removes the need for additional hyperparameter tuning and achieves performance comparable to full fine-tuning without increasing inference cost.
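To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of what "aligning optimizer dynamics within a low-rank subspace" could look like: the full-weight gradient is projected into a fixed low-rank subspace, and Adam-style momentum and variance estimates are maintained on that projected gradient, so the low-rank update mirrors what full fine-tuning's optimizer would do inside the subspace. All names here (rank `r`, basis `P`, factor `B`) are hypothetical illustrations, not LoFT's actual parameterization.

```python
# Illustrative sketch only, under the assumptions stated above.
import numpy as np

def adam_step(m, v, g, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on gradient g; returns (delta, m, v)."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)          # bias-corrected momentum
    v_hat = v / (1 - b2 ** t)          # bias-corrected variance
    return -lr * m_hat / (np.sqrt(v_hat) + eps), m, v

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2                           # layer shape and hypothetical adapter rank
W = rng.normal(size=(d_out, d_in))                 # frozen pretrained weight
P = np.linalg.qr(rng.normal(size=(d_out, r)))[0]   # orthonormal basis of the low-rank subspace

B = np.zeros((r, d_in))                            # trainable low-rank factor; update is P @ B
m_lr = np.zeros((r, d_in))                         # momentum kept for the projected gradient
v_lr = np.zeros((r, d_in))                         # variance kept for the projected gradient

for t in range(1, 4):
    g_full = rng.normal(size=(d_out, d_in))        # stand-in for the full-weight gradient
    g_proj = P.T @ g_full                          # gradient as seen inside the subspace
    delta, m_lr, v_lr = adam_step(m_lr, v_lr, g_proj, t)
    B += delta                                     # moments track full-FT dynamics restricted to the subspace

W_adapted = W + P @ B                              # merged weight: no extra inference cost
print(W_adapted.shape)
```

Because the adapter merges back into `W` after training, serving uses the same weight matrix shape as the pretrained model, which is why no inference overhead is incurred in this sketch.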