Synthesizing Interpretable Control Policies through Large Language Model Guided Search
This paper proposes a method that uses Large Language Models to evolve interpretable control policies represented as standard Python programs. The resulting policies offer a transparent, verifiable alternative to black-box neural-network controllers for dynamical systems.
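The core idea can be illustrated with a minimal sketch of the search loop. Everything below is hypothetical: the task (stabilizing a double integrator), the seed policy, and the `llm_propose` stub, which stands in for a real LLM call by perturbing numeric gains in the program text. The actual method in the paper may differ in its evolutionary operators, prompts, and evaluation tasks.

```python
import random
import re

SEED = "def policy(x, v):\n    return -1.0 * x - 1.0 * v\n"


def simulate(policy, steps=200, dt=0.05):
    """Roll out a policy on a double integrator (x'' = u); higher return = better."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = max(-5.0, min(5.0, policy(x, v)))  # saturated control input
        cost += (x * x + 0.1 * u * u) * dt     # quadratic state + effort cost
        v += u * dt
        x += v * dt
    return -cost  # fitness is negative accumulated cost


def compile_policy(src):
    """Execute the candidate program text and return its `policy` function."""
    ns = {}
    exec(src, ns)
    return ns["policy"]


def llm_propose(src, rng):
    """Stub standing in for an LLM call: rewrite the program by jittering its gains."""
    def jitter(m):
        return f"{float(m.group(0)) * rng.uniform(0.5, 1.5):.3f}"
    return re.sub(r"\d+\.\d+", jitter, src)


def evolve(generations=30, pop_size=8, seed=0):
    """Elitist evolutionary loop: keep the best program, ask for variations of it."""
    rng = random.Random(seed)
    population = [SEED]
    best = SEED
    for _ in range(generations):
        best = max(population, key=lambda s: simulate(compile_policy(s)))
        population = [best] + [llm_propose(best, rng) for _ in range(pop_size - 1)]
    return best, simulate(compile_policy(best))
```

Because the elitist loop always retains the best program found so far, the returned fitness never falls below that of the seed policy, and the winning candidate stays a plain, human-readable Python function.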