Beyond the Prompt in Large Language Models: Comprehension, In-Context Learning, and Chain-of-Thought
This paper presents a theoretical framework explaining three capabilities of Large Language Models: semantic prompt comprehension, framed as inference of transition probabilities; In-Context Learning, framed as a reduction of prompt ambiguity; and Chain-of-Thought reasoning, framed as the decomposition of a complex task into simpler sub-problems. The framework thereby offers novel insight into why advanced prompt engineering techniques are statistically superior.
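The decomposition view of Chain-of-Thought can be illustrated with a minimal prompt-construction sketch. This is not the paper's method; the prompt templates and the `build_*` helper names below are hypothetical, shown only to contrast a direct prompt with one that makes the sub-problems explicit.

```python
def build_direct_prompt(question: str) -> str:
    """A plain prompt: the model must produce the answer in a single step."""
    return f"Q: {question}\nA:"


def build_cot_prompt(question: str, steps: list[str]) -> str:
    """A Chain-of-Thought prompt: the complex task is decomposed into
    explicit sub-problems, so the model conditions each step on the
    intermediate results of the previous ones."""
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return f"Q: {question}\nLet's think step by step.\n{numbered}\nA:"


# Hypothetical example task, decomposed into two simpler sub-problems.
question = "Pens cost $2 each. How much do 3 pens and a $5 notebook cost?"
steps = [
    "Compute the cost of the pens: 3 * $2 = $6.",
    "Add the notebook: $6 + $5 = $11.",
]

print(build_cot_prompt(question, steps))
```

Under the paper's framing, each `Step i` line narrows the model's next-token distribution toward the sub-problem at hand, whereas the direct prompt leaves the full task to a single inference.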