Iterative In-Context Learning to Enhance LLMs' Abstract Reasoning: The Case Study of Algebraic Tasks
This paper proposes an iterative in-context learning methodology that optimizes few-shot example selection to enhance large language models' systematic generalization and reasoning on algebraic tasks with non-standard rules. The results reveal that simpler examples can sometimes outperform more complex ones.
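The core idea of iteratively refining the few-shot example set can be sketched as a simple search loop. The code below is a hypothetical illustration, not the authors' implementation: `score_prompt` is a stand-in for evaluating a candidate prompt on a held-out validation set (here it is a toy heuristic that favors shorter, simpler examples, echoing the abstract's observation), and the greedy swap strategy is an assumption about how the iteration might proceed.

```python
import random

def score_prompt(examples):
    # Toy stand-in for validation-set accuracy: here, shorter
    # (simpler) example sets score higher. In practice this would
    # query the LLM on held-out algebraic problems.
    return -sum(len(e) for e in examples) / max(len(examples), 1)

def iterative_selection(pool, k=3, iterations=50, seed=0):
    """Greedy hill-climbing over k-example prompts drawn from `pool`.

    Each iteration swaps one in-context example for a random
    alternative and keeps the change only if the score improves.
    """
    rng = random.Random(seed)
    current = rng.sample(pool, k)
    best_score = score_prompt(current)
    for _ in range(iterations):
        candidate = current.copy()
        candidate[rng.randrange(k)] = rng.choice(pool)
        s = score_prompt(candidate)
        if s > best_score:
            current, best_score = candidate, s
    return current, best_score

# Toy pool of algebra-style examples of varying complexity.
pool = [f"a+b={i}" * n for i, n in enumerate([1, 2, 3, 1, 4, 2], start=1)]
selected, score = iterative_selection(pool)
print(selected, score)
```

Under this toy scoring function the loop drifts toward the shortest examples in the pool; with a real LLM in the loop, `score_prompt` would instead measure task accuracy, and the same search could surface whichever examples, simple or complex, actually help.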