RePo: Language Models with Context Re-Positioning
This paper introduces RePo, a mechanism that uses a differentiable module to dynamically re-position tokens according to their contextual dependencies rather than fixed linear indices. By reducing extraneous cognitive load, RePo improves LLM performance on tasks involving noisy contexts, structured data, and long-range dependencies.
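To make the core idea concrete, here is a minimal sketch of what a differentiable re-positioning module might look like. This is an illustrative assumption, not the paper's actual architecture: a hypothetical `ContextRePositioner` predicts a positive step size for each token from its hidden state and accumulates these into monotonically increasing "soft" positions, which could then replace the fixed indices `0..n-1` fed into a positional encoding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextRePositioner(nn.Module):
    """Hypothetical sketch of context-dependent token re-positioning.

    Instead of assigning token i the fixed index i, predict a positive
    step size from each token's hidden state and take a cumulative sum,
    yielding differentiable positions shaped by the context.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Assumed design: a single linear head scoring each token.
        self.score = nn.Linear(d_model, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model)
        # softplus keeps every step strictly positive, so the resulting
        # positions are monotonically increasing along the sequence.
        steps = F.softplus(self.score(h)).squeeze(-1)  # (batch, seq_len)
        return steps.cumsum(dim=-1)  # soft positions, gradient flows to h


# Example: positions for a batch of 2 sequences of 5 tokens.
torch.manual_seed(0)
module = ContextRePositioner(d_model=8)
hidden = torch.randn(2, 5, 8)
positions = module(hidden)  # shape (2, 5), increasing along dim -1
```

Because the positions are produced by a differentiable computation rather than a fixed `arange`, gradients from the downstream language-modeling loss can adjust how tokens are spaced, e.g. compressing noisy spans or pulling related distant tokens closer in position space.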