Stronger Enforcement of Instruction Hierarchy via Augmented Intermediate Representations
This paper proposes a novel defense against prompt injection attacks in large language models. It augments intermediate token representations with layer-specific trainable embeddings that encode the instruction hierarchy, reducing attack success rates by 1.6x to 9.2x relative to state-of-the-art defenses without degrading model utility.
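The core mechanism can be illustrated with a minimal sketch: at each transformer layer, a trainable embedding chosen by the token's privilege level (e.g. system, user, tool/data) is added to that token's hidden state, so the hierarchy signal is re-injected at every layer rather than only at the input. The names, level scheme, and toy values below are illustrative assumptions, not the paper's implementation; plain Python lists stand in for real tensors.

```python
HIDDEN = 4   # toy hidden size
LAYERS = 2   # toy number of transformer layers
LEVELS = 3   # privilege levels: 0 = system, 1 = user, 2 = tool/data

# layer_embeddings[layer][level] is a trainable vector; in a real model these
# would be learned parameters, here they are fixed toy values.
layer_embeddings = [
    [[0.1 * (lvl + 1)] * HIDDEN for lvl in range(LEVELS)]
    for _ in range(LAYERS)
]

def augment(hidden_states, token_levels, layer):
    """Add the layer-specific hierarchy embedding to each token's state."""
    out = []
    for h, lvl in zip(hidden_states, token_levels):
        emb = layer_embeddings[layer][lvl]
        out.append([x + e for x, e in zip(h, emb)])
    return out

# Two tokens: one system-level, one user-level; zero states for clarity.
states = [[0.0] * HIDDEN, [0.0] * HIDDEN]
levels = [0, 1]
aug = augment(states, levels, layer=0)
```

Because the offset is applied per layer, downstream attention can distinguish privileged instructions from injected data at every depth, which is what lets the model suppress lower-privilege instructions that conflict with higher-privilege ones.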